<?xml version="1.0" encoding="UTF-8"?>
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns="http://purl.org/rss/1.0/" xmlns:l="http://purl.org/rss/1.0/modules/link/" xmlns:content="http://purl.org/rss/1.0/modules/content/">
 <!-- Generated by Ektron CMS400.NET -->
 <channel rdf:about="https://www.construx.com/PageTemplates/BlogPostDetail.aspx?blogid=23485">
  <title>10x Software Development</title>
  <link>https://www.construx.com/PageTemplates/BlogPostDetail.aspx?blogid=23485</link>
  <description></description>
  <dc:date>2018-09-14T18:47:41.9778750Z</dc:date>
  <dc:language>en-US</dc:language>
  <items>
   <rdf:Seq>
    <rdf:li rdf:resource="https://www.construx.com/10x_Software_Development/New_OnDemand_Course_-_10x_Software_Development,_2nd_Edition/?blogid=23485" />
    <rdf:li rdf:resource="https://www.construx.com/10x_Software_Development/New_OnDemand_Course_-_Scrum_Overview/?blogid=23485" />
    <rdf:li rdf:resource="https://www.construx.com/10x_Software_Development/Agile_Transformation_-_Keys_to_Success/?blogid=23485" />
    <rdf:li rdf:resource="https://www.construx.com/10x_Software_Development/17_Theses_on_Software_Estimation_(Expanded)/?blogid=23485" />
    <rdf:li rdf:resource="https://www.construx.com/10x_Software_Development/Understanding_Software_Project_Size_-_New_Lecture_Posted/?blogid=23485" />
    <rdf:li rdf:resource="https://www.construx.com/10x_Software_Development/17_Theses_on_Software_Estimation/?blogid=23485" />
    <rdf:li rdf:resource="https://www.construx.com/10x_Software_Development/_NoEstimates_-_Response_to_Ron_Jeffries/?blogid=23485" />
    <rdf:li rdf:resource="https://www.construx.com/10x_Software_Development/_NoEstimates/?blogid=23485" />
    <rdf:li rdf:resource="https://www.construx.com/10x_Software_Development/Human_Variation_Introduction_-_New_Lecture_Posted/?blogid=23485" />
    <rdf:li rdf:resource="https://www.construx.com/10x_Software_Development/Debian_Size_Claims_-_New_Lecture_Posted/?blogid=23485" />
    <rdf:li rdf:resource="https://www.construx.com/10x_Software_Development/Succeeding_with_Geographically_Distributed_Scrum_Teams_-_New_White_Paper/?blogid=23485" />
    <rdf:li rdf:resource="https://www.construx.com/10x_Software_Development/Selecting_an_Iteration_Approach_-_New_Lecture_Posted/?blogid=23485" />
    <rdf:li rdf:resource="https://www.construx.com/10x_Software_Development/The_Lifecycle_Model_Applied_to_Common_Methodologies_-_New_Lecture_Posted/?blogid=23485" />
    <rdf:li rdf:resource="https://www.construx.com/10x_Software_Development/Using_Lines_of_Code_as_a_Software_Size_Measure_-_New_Lecture_Posted/?blogid=23485" />
    <rdf:li rdf:resource="https://www.construx.com/10x_Software_Development/Team_Sizes_and_Schedule_Basics_-_New_Lectures_Posted/?blogid=23485" />
    <rdf:li rdf:resource="https://www.construx.com/10x_Software_Development/Variations_in_Iteration_-_New_Lecture_Posted/?blogid=23485" />
    <rdf:li rdf:resource="https://www.construx.com/10x_Software_Development/New___Understanding_Software_Projects___Lectures_Posted/?blogid=23485" />
    <rdf:li rdf:resource="https://www.construx.com/10x_Software_Development/Understanding_Software_Projects_Lecture_Series/?blogid=23485" />
    <rdf:li rdf:resource="https://www.construx.com/10x_Software_Development/Scrum_Chickens_and_Pigs/?blogid=23485" />
    <rdf:li rdf:resource="https://www.construx.com/10x_Software_Development/Scrum_Trainer_/_Senior_Fellow_Position_Available/?blogid=23485" />
    <rdf:li rdf:resource="https://www.construx.com/10x_Software_Development/2013_ECSE_Discussion_Topics_Posted/?blogid=23485" />
    <rdf:li rdf:resource="https://www.construx.com/10x_Software_Development/Software_Project_Archaeology/?blogid=23485" />
    <rdf:li rdf:resource="https://www.construx.com/10x_Software_Development/New_White_Papers_Now_Available/?blogid=23485" />
    <rdf:li rdf:resource="https://www.construx.com/10x_Software_Development/Construx_Executive_Summit_2012/?blogid=23485" />
    <rdf:li rdf:resource="https://www.construx.com/10x_Software_Development/Technical_Debt_Webinar_Archive_Version_Now_Available/?blogid=23485" />
    <rdf:li rdf:resource="https://www.construx.com/10x_Software_Development/Managing_Technical_Debt__Free_Webinar/?blogid=23485" />
    <rdf:li rdf:resource="https://www.construx.com/10x_Software_Development/Construx_Executive_Summit_2011__Software_Thought_Leaders/?blogid=23485" />
    <rdf:li rdf:resource="https://www.construx.com/10x_Software_Development/10_Deadly_Sins_of_Software_Estimation__Free_Webinar/?blogid=23485" />
    <rdf:li rdf:resource="https://www.construx.com/I_will_be_Giving_a_Keynote_at_the_Scrum_Alliance_Scrum_Gathering_May_17_2011/?blogid=23485" />
    <rdf:li rdf:resource="https://www.construx.com/10x_Software_Development/New_Software_Estimation_Survey/?blogid=23485" />
    <rdf:li rdf:resource="https://www.construx.com/10x_Software_Development/My_Books_Are_Now_Available_in_Kindle,_PDF,_and_Other_Electronic_Formats/?blogid=23485" />
    <rdf:li rdf:resource="https://www.construx.com/Why_Didnot_I_Like_The_Social_Network/?blogid=23485" />
    <rdf:li rdf:resource="https://www.construx.com/10x_Software_Development/Technical_Debt_Webinar_Recording_is_Now_Available/?blogid=23485" />
    <rdf:li rdf:resource="https://www.construx.com/10x_Software_Development/2011_Executive_Discussion_Topics_Announced/?blogid=23485" />
    <rdf:li rdf:resource="https://www.construx.com/10x_Software_Development/10x_Productivity_Myths__Where_s_the_10x_Difference_in_Compensation_/?blogid=23485" />
    <rdf:li rdf:resource="https://www.construx.com/10x_Software_Development/Upcoming_Free_Webinar__A_Technical_Debt_Roadmap/?blogid=23485" />
    <rdf:li rdf:resource="https://www.construx.com/10x_Software_Development/Origins_of_10X_–_How_Valid_is_the_Underlying_Research_/?blogid=23485" />
    <rdf:li rdf:resource="https://www.construx.com/10x_Software_Development/Construx_Job_Opening__Software_Development_Trainer-Consultant/?blogid=23485" />
    <rdf:li rdf:resource="https://www.construx.com/10x_Software_Development/2010_ECSE_Meeting_Topics_Announced/?blogid=23485" />
    <rdf:li rdf:resource="https://www.construx.com/10x_Software_Development/Why_Requirements_Weren_t_More_Prominent_in_Construx_s_Classic_Mistakes_Survey/?blogid=23485" />
    <rdf:li rdf:resource="https://www.construx.com/10x_Software_Development/Travel_Restrictions_and_Offshore_Development/?blogid=23485" />
    <rdf:li rdf:resource="https://www.construx.com/10x_Software_Development/State_of_the_Practice_Survey/?blogid=23485" />
    <rdf:li rdf:resource="https://www.construx.com/10x_Software_Development/Facebook_Page/?blogid=23485" />
    <rdf:li rdf:resource="https://www.construx.com/10x_Software_Development/Free_Webinar__10_Deadly_Sins_of_Software_Estimation/?blogid=23485" />
    <rdf:li rdf:resource="https://www.construx.com/10x_Software_Development/Next_Generation_Project_Planning_Tool__LiquidPlanner_2_0/?blogid=23485" />
    <rdf:li rdf:resource="https://www.construx.com/10x_Software_Development/Construx_Offers_Free_Training_for_Laid-Off_Software_Workers/?blogid=23485" />
    <rdf:li rdf:resource="https://www.construx.com/10x_Software_Development/2009_ECSE_Meeting_Topics_Announced/?blogid=23485" />
    <rdf:li rdf:resource="https://www.construx.com/10x_Software_Development/New_White_Papers/?blogid=23485" />
    <rdf:li rdf:resource="https://www.construx.com/10x_Software_Development/In_Defense_of_the_Bill_Gates_/_Jerry_Seinfeld_Ad__2/?blogid=23485" />
    <rdf:li rdf:resource="https://www.construx.com/10x_Software_Development/Software_Executive_Summit_2008_Rapidly_Approaching/?blogid=23485" />
    <rdf:li rdf:resource="https://www.construx.com/10x_Software_Development/Agile_Software__Business_Impact_and_Business_Benefits/?blogid=23485" />
    <rdf:li rdf:resource="https://www.construx.com/10x_Software_Development/New_Software_Executive_Summit_Speaker/?blogid=23485" />
    <rdf:li rdf:resource="https://www.construx.com/10x_Software_Development/Software_Executive_Summit_Details_Announced;_Early_Registration_Incentive/?blogid=23485" />
    <rdf:li rdf:resource="https://www.construx.com/10x_Software_Development/New_Software_Executive_Report_Available__Managing_Core_Development/?blogid=23485" />
    <rdf:li rdf:resource="https://www.construx.com/10x_Software_Development/Software_s_Classic_Mistakes--2008/?blogid=23485" />
    <rdf:li rdf:resource="https://www.construx.com/10x_Software_Development/Measuring_Productivity_of_Individual_Programmers/?blogid=23485" />
    <rdf:li rdf:resource="https://www.construx.com/10x_Software_Development/Chief_Programmer_Team_Update/?blogid=23485" />
    <rdf:li rdf:resource="https://www.construx.com/blogs/stevemcc/archive/2008/03/27/productivity-variations-among-software-developers-and-teams-the-origin-of-quot-10x-quot.aspx?blogid=23485" />
    <rdf:li rdf:resource="https://www.construx.com/10x_Software_Development/How_to_Scale_Up_Quickly/?blogid=23485" />
    <rdf:li rdf:resource="https://www.construx.com/10x_Software_Development/Software_Development_Seminars_in_New_York_City/?blogid=23485" />
    <rdf:li rdf:resource="https://www.construx.com/10x_Software_Development/Technical_Debt_Decision_Making/?blogid=23485" />
    <rdf:li rdf:resource="https://www.construx.com/10x_Software_Development/Technical_Debt/?blogid=23485" />
    <rdf:li rdf:resource="https://www.construx.com/10x_Software_Development/5_Questions_on_Agile_Development/?blogid=23485" />
    <rdf:li rdf:resource="https://www.construx.com/10x_Software_Development/Building_a_Fort__Lessons_in_Software_Estimation/?blogid=23485" />
    <rdf:li rdf:resource="https://www.construx.com/10x_Software_Development/Industry_Benchmarks_About_Hours_Worked_Per_Week/?blogid=23485" />
    <rdf:li rdf:resource="https://www.construx.com/10x_Software_Development/How_to_Self-Study_for_a_Computer_Programming_Job/?blogid=23485" />
    <rdf:li rdf:resource="https://www.construx.com/10x_Software_Development/Best_Companies_to_Work_For,_Part_2/?blogid=23485" />
    <rdf:li rdf:resource="https://www.construx.com/10x_Software_Development/Best_Companies_to_Work_For,_Part_1/?blogid=23485" />
    <rdf:li rdf:resource="https://www.construx.com/10x_Software_Development/Rumors_of_Software_Engineerings_Death_are_Greatly_Exaggerated_(aka_Software_Engineering_Ignorance,_Part_II)/?blogid=23485" />
    <rdf:li rdf:resource="https://www.construx.com/10x_Software_Development/Software_Engineering_Ignorance/?blogid=23485" />
    <rdf:li rdf:resource="https://www.construx.com/10x_Software_Development/Classic_Mistakes_Updated/?blogid=23485" />
    <rdf:li rdf:resource="https://www.construx.com/10x_Software_Development/Estimation_of_Outsourced_Projects/?blogid=23485" />
    <rdf:li rdf:resource="https://www.construx.com/10x_Software_Development/Software_Compensation_2007--Is_it_1999_All_Over_Again_/?blogid=23485" />
    <rdf:li rdf:resource="https://www.construx.com/10x_Software_Development/Cone_of_Uncertainty_Controversy/?blogid=23485" />
    <rdf:li rdf:resource="https://www.construx.com/10x_Software_Development/Is_Faster_Always_Faster_/?blogid=23485" />
    <rdf:li rdf:resource="https://www.construx.com/10x_Software_Development/Thinking_About_Software_Executives/?blogid=23485" />
    <rdf:li rdf:resource="https://www.construx.com/10x_Software_Development/The_Existential_Pleasures_of_Blogging/?blogid=23485" />
   </rdf:Seq>
  </items>
 </channel>
 <item rdf:about="/10x_Software_Development/New_OnDemand_Course_-_10x_Software_Development,_2nd_Edition/?blogid=23485">
  <title>New OnDemand Course - 10x Software Development, 2nd Edition</title>
  <link>https://www.construx.com/10x_Software_Development/New_OnDemand_Course_-_10x_Software_Development,_2nd_Edition/?blogid=23485</link>
  <description><![CDATA[New OnDemand Course: 10x Software Development, Second Edition. How do you maximize team productivity? Decades of research have found at least a ten-fold—“10x”—difference in productivity and quality between the best teams and the worst.]]></description>
  <dc:creator>stevemcc</dc:creator>
  <dc:date>2017-04-26T10:54:46Z</dc:date>
  <content:encoded><![CDATA[<p>How do you maximize team productivity? Decades of research have found at least a ten-fold—“10x”—difference in productivity and quality between the best teams and the worst. The studies have collectively involved hundreds of professional programmers across a spectrum of programming activities. Specific differences range from about 5:1 to about 25:1, and in my judgment, that collectively supports the 10x claim. Moreover, the research finding is consistent with my experience, in which I have personally observed 10x differences (or more) between different programmers.<br /><br /> Fully updated from beginning to end, our  <a href="http://ondemand.construx.com/online-course/10x-software-development-second-edition/" title="10x Software Development, Second Edition online course">10x Software Development, Second Edition</a> online course describes the Eight Key Principles of 10x software development—how the most effective teams approach their work. The principles are:&#160;</p>
<ul>
<li>Avoid minus-x software development</li>
<li>Set direction</li>
<li>Attack uncertainty</li>
<li>Tailor the solution to the problem</li>
<li>Seek ground truth</li>
<li>Make decisions with data</li>
<li>Minimize unintentional rework</li>
<li>Grow capability</li>
</ul>
<p>You’ll gain a deep understanding of these principles in this course, and you’ll learn specific tactics for turning your team into a 10x team.<br /><br /> New for the second edition are multiple activities to deepen your learning experience, including case studies, exercises, and quizzes; a reassessment and refreshing of every lesson in the course via full in-studio production (no “voice over PowerPoint”); and the addition of tactic-specific resources to help you take your learning beyond our course. There’s literally nothing about this course that we haven’t improved!<br /><br /> Because 10x software development requires all roles to be strong, this course is appropriate for Managers, Technical Leads, Quality Leads, Test Leads, Developers, Testers, and other software project stakeholders. In other words, this is a good course for software development teams as well as individual practitioners.<br /><br /> After you complete this course, you will be able to:&#160;</p>
<ul>
<li>Apply tactics to address the classic mistakes your team is making</li>
<li>Identify the development fundamentals you need to grow</li>
<li>Make decisions that will stick</li>
</ul>
<p>After your team completes this course, it will be able to:</p>
<ul>
<li>Confirm that you are all aligned on the project’s objectives</li>
<li>Match your development lifecycle to your work rather than the other way around</li>
<li>Apply risk management appropriately</li>
<li>Plan the right kind of early defect detection</li>
<li>Review and enhance your feedback loops</li>
</ul>
<p>If you’re not already a member of Construx OnDemand,  <a href="https://cxlearn.com/user/login/sku/AAP30Fre" title="start a free trial today">start a free trial today</a> and take your first steps toward 10x excellence!</p>
<p>For a description of the body of research proving the existence of the 10x phenomenon, see my earlier blog post  <a href="http://www.construx.com/10x_Software_Development/Origins_of_10X_%E2%80%93_How_Valid_is_the_Underlying_Research_/" title="“Origins of 10X – How Valid is the Underlying Research?”">“Origins of 10X – How Valid is the Underlying Research?”</a></p>]]></content:encoded>
 </item>
 <item rdf:about="/10x_Software_Development/New_OnDemand_Course_-_Scrum_Overview/?blogid=23485">
  <title>New OnDemand Course - Scrum Overview</title>
  <link>https://www.construx.com/10x_Software_Development/New_OnDemand_Course_-_Scrum_Overview/?blogid=23485</link>
  <description><![CDATA[We're happy to announce a new course: <i>Scrum Overview</i>. Scrum enables software development teams to address complex adaptive problems, and in this course you’ll learn Scrum’s]]></description>
  <dc:creator>stevemcc</dc:creator>
  <dc:date>2017-02-14T17:23:19Z</dc:date>
  <content:encoded><![CDATA[<p>Hello, Steve McConnell here. We are always updating our Construx OnDemand training, and I’m happy to announce a new course: <i><a href="http://ondemand.construx.com/online-course/scrum-overview" title="Scrum Overview" target="_blank">Scrum Overview</a></i>. Scrum enables software development teams to address complex adaptive problems, and in this course you’ll learn Scrum’s elements—its roles, events, and artifacts—as well as their purpose: how they’re all bound together to make Scrum hum. In short, you’ll learn what’s different about Scrum and what makes it so effective.<br /><br />With the myriad Scrum courses available today, why would you take a course called <a href="http://ondemand.construx.com/online-course/scrum-overview" title="Scrum Overview" style="font-style: italic;">Scrum Overview</a>? If you’re a nontechnical stakeholder, such as an executive, sales and marketing staff, or even a product owner, you might find value in getting an introduction to Scrum without diving into all the details, and the context that’s set in this course can be very valuable. Even if you are a practitioner and you expect to get into the ins and outs of Scrum on a daily basis, it can be useful to start with the context of Scrum so that you understand its full scope before diving into all those implementation details. Either way, this can be a valuable introduction to and overview of Scrum.<br /><br />Your instructor in this course is <b>Jenny Stuart</b>. Jenny has been Vice President of Consulting at Construx for more than 10 years, and in that role she has overseen Construx’s work in implementing Scrum, and implementing Agile practices more generally, in organizations throughout North America and around the world. Jenny has worked effectively with individual contributors on Scrum teams, with Scrum Masters, and with executives who are supporting Scrum at the organizational level, and she has learned numerous lessons about how to support Scrum effectively by working at all those different levels. You’re in good hands with Jenny as the leader of this course.<br /></p>
<p><b>Here’s the top-level outline for the course:</b></p>
<ul>
<li><b>Introduction</b></li>
<li><b>The Scrum Framework</b></li>
<li><b>Scrum Roles</b></li>
<li><b>Requirements in Scrum</b></li>
<li><b>Agile Estimation</b></li>
<li><b>Plan a Sprint</b></li>
<li><b>Execute a Sprint</b></li>
<li><b>Wrap Up a Sprint</b></li>
<li><b>Adoption Pitfalls</b></li>
<li><b>Conclusion</b></li>
</ul>
<div>If you want to go deeper with Scrum, follow this course with Construx OnDemand’s <a href="http://ondemand.construx.com/online-course/scrum-boot-camp/" title="Scrum Boot Camp" target="_blank" style="font-style: italic;">Scrum Boot Camp</a>, which will teach you what you need to know to become certified in Scrum.<br /><br />If you’re a Scrum Product Owner or want to become one, <a href="http://ondemand.construx.com/online-course/pobc/" title="Product Owner Boot Camp" target="_blank" style="font-style: italic;">Product Owner Boot Camp</a> drills down into the details you need to successfully plan releases, reflect stakeholder priorities, ensure the team builds the right product, and communicate with project stakeholders.<br /><br />For more on developing requirements in Agile scenarios such as Scrum, see <a href="http://ondemand.construx.com/online-course/agile-requirements/" title="Agile Requirements Boot Camp" target="_blank" style="font-style: italic;">Agile Requirements Boot Camp</a>, which teaches you how to use story mapping to define project scope, write user stories, size stories (agile estimation), and develop acceptance criteria for user stories.<br /><br /><a href="https://cxlearn.vueocity.com/user/login/sku/AAP30Fre" title="Try Construx OnDemand for free today" target="_blank">Try Construx OnDemand for free today</a> (no credit card required), and enjoy the course!</div>]]></content:encoded>
 </item>
 <item rdf:about="/10x_Software_Development/Agile_Transformation_-_Keys_to_Success/?blogid=23485">
  <title>Agile Transformation - Keys to Success</title>
  <link>https://www.construx.com/10x_Software_Development/Agile_Transformation_-_Keys_to_Success/?blogid=23485</link>
  <description><![CDATA[I wanted to let you know that I've posted a two-part series on Construx's experience with Agile Transformations, pitfalls, keys to success, and so on.&#160;   The videos focus on two models that describe what we have seen]]></description>
  <dc:creator>stevemcc</dc:creator>
  <dc:date>2016-03-09T14:44:29Z</dc:date>
  <content:encoded><![CDATA[<p>I wanted to let you know that I've posted a two-part series on Construx's experience with Agile Transformations, pitfalls, keys to success, and so on.&#160;</p>
<p>The videos focus on two models that describe the transformation issues we have seen on the ground. You might have seen one or both of the models before, but they aren't often applied specifically to Agile adoption work. The focus of the videos is on showing how these general models specifically apply to Agile transformations. We have found that these models predict very well the challenges to expect in a transformation initiative and contain good insights into how to successfully overcome the challenges.&#160;</p>
<p>Part 1: Agile Transformation - <a href="https://youtu.be/YqAYJASbze4?list=PLwg-V1fR_cxvoF8eF5nkE7DrnVN4D7TNb" title="Change Model">Change Model</a></p>
<p>Part 2: Agile Transformation - <a href="https://youtu.be/4xzHTF27Dog" title="Adoption Model">Adoption Model</a></p>
<p>Check out the talks!</p>
<p><br /><a href="https://youtu.be/YqAYJASbze4?list=PLwg-V1fR_cxvoF8eF5nkE7DrnVN4D7TNb" title="Agile Transformation - Keys to Success - Two Part Series by Steve McConnell"><img src="https://www.construx.com/uploadedImages/AgileTransformation.jpg" alt="Agile Transformation - Keys to Success - Two Part Series by Steve McConnell" title="Agile Transformation - Keys to Success - Two Part Series by Steve McConnell" border="0" /></a></p>]]></content:encoded>
 </item>
 <item rdf:about="/10x_Software_Development/17_Theses_on_Software_Estimation_(Expanded)/?blogid=23485">
  <title>17 Theses on Software Estimation (Expanded)</title>
  <link>https://www.construx.com/10x_Software_Development/17_Theses_on_Software_Estimation_(Expanded)/?blogid=23485</link>
  <description><![CDATA[This post is part of an ongoing discussion with Ron Jeffries, which originated as a comment on #NoEstimates. You can read my original "17 Theses" post here. That post has been completely subsumed in this post if you want to]]></description>
  <dc:creator>stevemcc</dc:creator>
  <dc:date>2015-08-18T16:51:33Z</dc:date>
  <content:encoded><![CDATA[<p><i>This post is part of an ongoing discussion with Ron Jeffries, which originated from some comments I made about #NoEstimates. You can read my original "17 Theses on Software Estimation" post <a href="http://www.construx.com/10x_Software_Development/17_Theses_on_Software_Estimation/" title="here">here</a>. That post has been completely subsumed by this post, so you can read just this one. You can read Ron's response to my original 17 Theses article&#160;<a href="http://ronjeffries.com/articles/015-aug/est-mcc-again/" title="here">here</a>. This post doesn't respond to Ron's post per se. It has been expanded to address points he raised, but responses to him are&#160;more implicit than explicit.&#160;</i></p>
<p>Arriving late to the #NoEstimates discussion, I’m amazed at some of the assumptions that have gone unchallenged, and I’m also amazed at the absence of some fundamental points that no one seems to have made so far. The point of this article is to state unambiguously what I see as the arguments in favor of estimation in software and put #NoEstimates in context. &#160;</p>
<p><b>1.&#160;Estimation is often done badly, ineffectively, and in an overly time-consuming way.&#160;</b></p>
<p>My company and I have taught upwards of 10,000 software professionals better estimation practices, and believe me, we have seen every imaginable horror story of estimation done poorly. There is no question that “estimation is often done badly” is a true observation of the state of the practice.&#160;</p>
<p><b>2.&#160;The root cause of poor estimation is usually lack of estimation skills.&#160;</b></p>
<p>Estimation done poorly is most often due to lack of estimation skills. Smart people using common sense are not sufficient to estimate software projects. Reading two-page blog articles on the internet is not going to teach anyone how to estimate very well. Good estimation is not that hard, once you’ve developed the skill, but it isn’t intuitive or obvious, and it requires focused self-education or training.&#160;</p>
<p>One of the most common estimation problems is people engaging with so-called estimates that are not really Estimates, but that are really Business Targets or requests for Commitments. You can read more about that in my estimation book or watch my <a href="https://youtu.be/FY9X21HA02w?list=PLwg-V1fR_cxvoF8eF5nkE7DrnVN4D7TNb" title="short&#160;video">short&#160;video</a> on Estimates, Targets, and Commitments.&#160;</p>
<p><b>3.&#160;Many comments in support of #NoEstimates demonstrate a lack of basic software estimation knowledge.&#160;</b></p>
<p>I don’t expect most #NoEstimates advocates to agree with this thesis, but as someone who does know a lot about estimation, I think it’s clear on its face. Here are some examples:</p>
<p>(a) Are estimation and forecasting the same thing? As far as software estimation is concerned, yes they are. (Just do a Google or Bing search for "definition of forecast".) Estimation, forecasting, prediction--it's all the same basic activity, as far as software estimation is concerned.&#160;</p>
<p>(b) Is showing someone several pictures of kitchen remodels that have been completed for $30,000 and implying that the next kitchen remodel can be completed for $30,000 estimation? Yes, it is. That’s an implementation of a technique called Reference Class Forecasting.&#160;</p>
<p>(c) Does doing a few iterations, calculating team velocity, and then using that empirical velocity data to project a completion date count as estimation? Yes, it does. Not only is it estimation, it is a really effective form of estimation. I’ve heard people argue that because velocity is empirically based, it isn’t estimation. Good estimation is empirically based, so that argument exposes a lack of basic understanding of the nature of estimation.&#160;</p>
<p>(d) Does counting the number of stories completed in each sprint (rather than story points), calculating the average number of stories completed per sprint, and using that average for sprint planning count as estimation? Yes, for the same reasons listed in point (c).&#160;</p>
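<p>As a minimal sketch of the throughput-based projection described in points (c) and (d)—using entirely hypothetical sprint numbers, not data from any real project—the arithmetic looks like this:</p>

```python
# Minimal sketch of velocity-based projection, as in points (c) and (d).
# The per-sprint story counts and backlog size are hypothetical examples.
import math

completed_per_sprint = [7, 9, 8, 8]  # stories finished in each past sprint
remaining_stories = 40               # stories still in the backlog

# Empirical velocity: average throughput over the observed sprints.
velocity = sum(completed_per_sprint) / len(completed_per_sprint)

# Project how many more sprints the remaining work will take.
sprints_needed = math.ceil(remaining_stories / velocity)

print(f"average velocity: {velocity:.1f} stories/sprint")  # 8.0
print(f"projected sprints to finish: {sprints_needed}")    # 5
```

<p>The projection is an estimate in exactly the sense discussed above: it uses observed, empirical data to predict a future outcome.</p>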
<p>(e) Most of the #NoEstimates approaches that have been proposed, including (c) and (d) above, are approaches that were defined in my book <i>Software Estimation: Demystifying the Black Art</i>, published in 2006. The fact that people are claiming these long-ago-published techniques as "new" under the umbrella of #NoEstimates is another reason I say many of the #NoEstimates comments demonstrate a lack of basic software estimation knowledge.&#160;</p>
<p>(f) Is estimation time-consuming and a waste of time? One of the most common symptoms of lack of estimation skill is spending too much time on ineffective activities. This work is often well-intentioned, but it’s common to see well-intentioned people doing more work than they need to, yet getting worse estimates than they could be getting.</p>
<p>(g) Is it possible to get good estimates? Absolutely. We have worked with multiple companies that have gotten to the point where they are delivering 90%+ of their projects on time, on budget, with intended functionality.&#160;</p>
<p>One reason many people find estimation discussions (aka negotiations) challenging is that they don't really believe the estimates they came up with themselves. Once you develop the skill needed to estimate well -- as well as getting clear about whether the business is really talking about an estimate, a target, or a commitment -- estimation discussions become more collaborative and easier.&#160;</p>
<p><b>4.&#160;Being able to estimate effectively is a skill that any true software professional needs to develop, even if they don’t need it on every project.&#160;</b></p>
<p>“Estimation often doesn't work very well, therefore software professionals should not develop estimation skill” – this is a common line of reasoning in #NoEstimates. This argument doesn't make any more sense than the argument, "Scrum often doesn't work very well, therefore software professionals should not try to use Scrum." The right response in both cases is, "Get better at the practice," not "Throw out the practice altogether."&#160;</p>
<p>#NoEstimates advocates say they're just exploring the contexts in which a person or team might be able to do a project without estimating. That exploration is fine, but until someone can show that the vast majority of projects do not need estimates at all, deciding to not estimate and not develop estimation skills is premature. And my experience tells me that when all the dust settles, the cases in which no estimates are needed will be the exception rather than the rule. Thus software professionals will benefit -- and their organizations will benefit -- from developing skill at estimation.&#160;</p>
<p>I would go further and say that a true software professional should develop estimation skill so that they can estimate competently on the numerous projects that require estimation. I don't make these claims about software professionalism lightly. I spent four years as chair of the IEEE committee that oversees software professionalism issues for the IEEE, including overseeing the Software Engineering Body of Knowledge, university accreditation standards, professional certification programs, and coordination with state licensing bodies. I spent another four years as vice-chair of that committee. I also wrote a book on the topic, so if you're interested in going into detail on software professionalism, you can check out my book, <a href="http://www.amazon.com/Professional-Software-Development-Schedules-Successful/dp/0321193679/ref=asap_bc?ie=UTF8" title="Professional Software Development.">Professional Software Development.</a>&#160;Or you can check out a much briefer, more specific explanation in my company's white paper about our <a href="http://www.construx.com/uploadedFiles/Construx/Construx_Content/Resources/White_Papers/Construx%20Professional%20Dev%20Ladder.pdf" title="Professional Development Ladder">Professional Development Ladder</a>.&#160;</p>
<p><b>5.&#160;Estimates serve numerous legitimate, important business purposes.</b></p>
<p>Estimates are used by businesses in numerous ways, including:&#160;</p>
<ul>
<li>Allocating budgets to projects (i.e., estimating the effort and budget of each project)</li>
<li>Making cost/benefit decisions at the project/product level, which is based on cost (software estimate) and benefit (defined feature set)</li>
<li>Deciding which projects get funded and which do not, which is often based on cost/benefit</li>
<li>Deciding which projects get funded this year vs. next year, which is often based on estimates of which projects will finish this year</li>
<li>Deciding which projects will be funded from CapEx budget and which will be funded from OpEx budget, which is based on estimates of total project effort, i.e., budget</li>
<li>Allocating staff to specific projects, i.e., estimates of how many total staff will be needed on each project</li>
<li>Allocating staff within a project to different component teams or feature teams, which is based on estimates of scope of each component or feature area</li>
<li>Allocating staff to non-project work streams (e.g., budget for a product support group, which is based on estimates for the amount of support work needed)</li>
<li>Making commitments to internal business partners (based on projects’ estimated availability dates)</li>
<li>Making commitments to the marketplace (based on estimated release dates)</li>
<li>Forecasting financials (based on when software capabilities will be completed and revenue or savings can be booked against them)</li>
<li>Tracking project progress (comparing actual progress to planned (estimated) progress)</li>
<li>Planning when staff will be available to start the next project (by estimating when staff will finish working on the current project)</li>
<li>Prioritizing specific features on a cost/benefit basis (where cost is an estimate of development effort)</li>
</ul>
<p>These are just a subset of the many legitimate reasons that businesses request estimates from their software teams. I would be very interested to hear how #NoEstimates advocates suggest that a business would operate if estimates were removed for each of these purposes.</p>
<p>The #NoEstimates response to these business needs is typically of the form, “Estimates are inaccurate and therefore not useful for these purposes” rather than, “The business doesn’t need estimates for these purposes.”&#160;</p>
<p>That argument really just says that businesses are currently operating on the basis of much worse predictions than they should be, and probably making poorer decisions as a result, because the software staff are not providing very good estimates. If software staff provided more accurate estimates, the business would make better decisions in each of these areas, which would make the business stronger.&#160;</p>
<p>The other #NoEstimates response is that "Estimates are always waste." I don't agree with that. By that line of reasoning, daily stand-ups are waste. Sprint planning is waste. Retrospectives are waste. Testing is waste. Everything but code-writing itself is waste. I realize there are Lean purists who hold those views, but I don't buy any of that.&#160;</p>
<p>Estimates, done well, support business decision making, including the decision not to do a project at all. Taking the #NoEstimates philosophy to its logical conclusion, if #NoEstimates eliminates waste, then #NoProjectAtAll eliminates even more waste. In most cases, the business will need an estimate to decide not to do the project at all. &#160;</p>
<p>In my experience, businesses usually value predictability, and in many cases, they value predictability more than they value agility. Do businesses <i>always </i>need predictability? No, there are few absolutes in software. Do businesses <i>usually </i>need predictability? In my experience, yes, and they need it often enough that doing it well makes a positive contribution to the business. Responding to change is <i>also </i>usually needed, and doing it well <i>also </i>makes a positive contribution to the business. This whole topic is a case where <i>both </i>predictability <i>and </i>agility&#160;work better than <i>either/or</i>. Competency in estimation should be part of the definition of a true software professional, as should skill in Scrum and other agile practices.&#160;</p>
<b>6.&#160;Part of being an effective estimator is understanding that different estimation techniques should be used for different kinds of estimates.&#160;</b> <p>One thread that runs throughout the #NoEstimates discussions is lack of clarity about whether we’re estimating before the project starts, very early in the project, or after the project is underway. The conversation is also unclear about whether the estimates are project-level estimates, task-level estimates, sprint-level estimates, or some combination. Some of the comments imply ineffective attempts to combine kinds of estimates—the most common confusion I’ve read is trying to use task-level estimates to estimate a whole project, which is another example of lack of software estimation skill.&#160;</p>
<p>You can see a summary of estimation techniques and their areas of applicability <a href="http://www.stevemcconnell.com/EstimationQuickReference.pdf" title="here">here</a>. This quick reference sheet assumes familiarity with concepts and techniques from my estimation book and is not intended to be intuitive on its own. But just looking at the categories you can see that different techniques apply for estimating size, effort, schedule, and features. Different techniques apply for small, medium, and large projects. Different techniques apply at different points in the software lifecycle, and different techniques apply to Agile (iterative) vs. Sequential projects. Effective estimation requires that the right kind of technique be applied to each different kind of estimate.&#160;</p>
<p>Learning these techniques is not hard, but it isn't intuitive. Learning when to use each technique, as well as learning each technique, requires some professional skills development.&#160;</p>
<p>When we separate the kinds of estimates, we can see parts of projects where estimates are not needed. One of the advantages of Scrum is that it eliminates the need to do any sort of miniature milestone/micro-stone/task-based estimates to track work inside a sprint. If I'm doing sequential development without Scrum, I need those detailed estimates to plan and track the team's work. If I'm using Scrum, once I've started the sprint I don't need estimation to track the day-to-day work, because I know where I'm going to be in two weeks and there's no real value added by predicting where I'll be day-by-day within that two-week sprint.&#160;</p>
<p>That doesn't eliminate the need for estimates in Scrum entirely, however. I still need an estimate during sprint planning to determine how much functionality to commit to for that sprint. Backing up earlier in the project, before the project has even started, businesses need estimates for all the business purposes described above, including deciding whether to do the project at all. They also need to decide how many people to put on the project, how much to budget for the project, and so on. Treating all the requirements as emergent on a project is fine for some projects, but you still need to decide whether you're going to have a one-person team treating requirements as emergent, or a five-person team, or a 50-person team. Defining team size in the first place requires estimation.&#160;</p>
<b>7.&#160;Estimation and planning are not the same thing, and you can estimate things that you can’t plan.&#160;</b> <p>Many of the examples given in support of #NoEstimates are actually indictments of overly detailed waterfall planning, not estimation. The simple way to understand the distinction is to remember that planning is about “how” and estimation is about “how much.”&#160;</p>
<p>Can I “estimate” a chess game, if by “estimate” I mean how each piece will move throughout the game? No, because that isn’t estimation; it’s planning; it’s “how.”</p>
<p>Can I estimate a chess game in the sense of “how much”? Sure. I can collect historical data on the length of chess games and know both the average length and the variation around that average and predict the length of a game.&#160;</p>
<p>More to the point, estimating an individual software project is not analogous to estimating one chess game. It’s analogous to estimating a series of chess games. People who are not skilled in estimation often assume it’s more difficult to estimate a series of games than to estimate an individual game, but estimating the series is actually easier. Indeed, the more chess games in the set, the more accurately we can estimate the set, once we understand the math involved.&#160;</p>
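<p>The math behind that claim is just the statistics of sums: independent game-to-game variations partially cancel, so the <i>relative</i> uncertainty of a series shrinks roughly with the square root of the number of games. A minimal simulation sketch (the distribution and all numbers are invented purely for illustration):</p>

```python
import random
import statistics

random.seed(42)

def game_length():
    # Assumed, illustrative distribution: ~40 moves on average, wide spread.
    return max(1.0, random.gauss(40, 15))

def relative_spread(n_games, trials=5000):
    """Std. dev. of the total length of a series, as a fraction of its mean."""
    totals = [sum(game_length() for _ in range(n_games)) for _ in range(trials)]
    return statistics.stdev(totals) / statistics.mean(totals)

for n in (1, 10, 100):
    print(f"{n:>3} game(s): relative spread ~ {relative_spread(n):.3f}")
```

<p>The relative spread for 100 games comes out much smaller than for 1 game, which is why the series is easier to estimate than any individual game in it.</p>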
<p>This all goes back to the idea that we need estimates for different purposes at different points in a project. An agile project may be about "steering" rather than estimating once the project gets underway. But it may not be allowed to get underway in the first place if there aren't early estimates that show there's a business case for doing the project.&#160;</p>
<b>8.&#160;You can estimate what you don’t know, up to a point.&#160;</b> <p>In addition to estimating “how much,” you can also estimate “how uncertain.” In the #NoEstimates discussions, people throw out lots of examples along the lines of, “My project was doing unprecedented work in Area X, and therefore it was impossible to estimate the whole project.” This is essentially a description of the common estimation mistake of allowing high variability in one area to insert high variability into the whole project's estimate rather than just that one area's estimate.&#160;</p>
<p>Most projects contain a mix of precedented and unprecedented work (also known as certain/uncertain, high risk/low risk, predictable/unpredictable, high/low variability--all of which are loose synonyms as far as estimation is concerned). Decomposing the work, estimating uncertainty in each area, and building up an overall estimate that includes that uncertainty <i>proportionately </i>is one technique for dealing with uncertainty in estimates.&#160;</p>
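<p>As a concrete sketch of that decomposition technique (area names and numbers are invented for illustration): if the areas vary independently, their uncertainties combine in quadrature, so one high-variability area widens the overall estimate proportionately instead of dominating it the way naive worst-casing would:</p>

```python
import math

# Hypothetical work breakdown: each area has a nominal estimate
# (person-weeks) and an uncertainty expressed as a standard deviation.
areas = {
    "user accounts (precedented)": (20, 2),
    "reporting (precedented)":     (30, 3),
    "ML ranking (unprecedented)":  (15, 9),
}

nominal = sum(mean for mean, _ in areas.values())

# Assuming the areas vary independently, uncertainties add in quadrature.
combined_sd = math.sqrt(sum(sd ** 2 for _, sd in areas.values()))

# Naive worst-casing adds the uncertainties linearly instead.
worst_case_sd = sum(sd for _, sd in areas.values())

print(f"nominal total: {nominal} person-weeks")
print(f"combined uncertainty (independent areas): +/-{combined_sd:.1f}")
print(f"naive worst-case uncertainty: +/-{worst_case_sd:.1f}")
```

<p>Here the unprecedented area contributes most of the combined uncertainty, but the project-level band (about &#177;9.7 person-weeks on a nominal 65) is still far tighter than the &#177;14 a straight sum of the uncertainties would suggest.</p>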
<p>Why would that ever be needed? Because a business that perceives a whole project as highly risky might decide not to approve the whole project. A business that perceives a project as low to moderate risk overall, with selected areas of high risk, might decide to approve that same project.&#160;</p>
<b>9.&#160;Both estimation and control are needed to achieve predictability.&#160;</b> <p>Much of the writing on Agile development emphasizes project control over project estimation. I actually agree that project control is more powerful than project estimation; however, effective estimation usually plays an essential role in achieving effective control.&#160;</p>
<p>To put this in Agile Manifesto-like terms:</p>
<p style="text-align: center;">We have come to value project control over project estimation,&#160;<br />as a means of achieving predictability</p>
<p style="text-align: center;">As in the Agile Manifesto, we value both terms, which means we still value the term on the right.&#160;</p>
<p>#NoEstimates seems to pay lip service to both terms, but the emphasis from the hashtag onward is really about discarding the term on the right. This is another case where I believe the right answer is <i>both/and</i>, not <i>either/or</i>.&#160;</p>
<p>I wrote an essay when I was Editor in Chief of IEEE Software called "<a href="http://www.stevemcconnell.com/ieeesoftware/eic11.htm" title="Sitting on the Suitcase">Sitting on the Suitcase</a>" that discussed the interplay between estimation and control and discussed why we estimate even though we know the activity has inherent limitations. This is still one of my favorite essays.&#160;</p>
<b>10.&#160;People use the word "estimate" sloppily.&#160;</b> <p>No doubt. Lack of understanding of estimation is not limited to people tweeting about #NoEstimates. Business partners often use the word “estimate” to refer to what would more properly be called a “planning target” or “commitment.” &#160;</p>
<p>The word "estimate" does have a clear definition, for those who want to look it up. &#160;</p>
<ul>
<li><a href="http://dictionary.reference.com/browse/estimate" title="Dictionary.com">Dictionary.com</a></li>
<li><a href="http://www.merriam-webster.com/dictionary/estimate" title="Merriam-Webster">Merriam-Webster</a></li>
<li><a href="http://www.thefreedictionary.com/estimate" title="The Free Dictionary">The Free Dictionary</a>&#160;&#160;</li>
</ul>
<p>The gist of these definitions is that an "estimate" is something that is approximate, rough, or tentative, and is based upon impressions or opinion. People don't always use the word that way, and you can see my video on that topic&#160;<a href="https://youtu.be/FY9X21HA02w?list=PLwg-V1fR_cxvoF8eF5nkE7DrnVN4D7TNb" title="here">here</a>.&#160;</p>
<p>Because people use the word sloppily, one common mistake software professionals make is trying to create a predictive, approximate estimate when the business is really asking for a commitment, or asking for a plan to meet a target, but using the word “estimate” to ask for that. It's common for businesses to think they have a problem with estimation when the bigger problem is with their commitment process.&#160;</p>
<p>We have worked with many companies to achieve organizational clarity about estimates, targets, and commitments. Clarifying these terms makes a huge difference in the dynamics around creating, presenting, and using software estimates effectively.&#160;</p>
<b>11.&#160;Good project-level estimation depends on good requirements, and average requirements skills are about as bad as average estimation skills.&#160;</b> <p>A common refrain in Agile development is “It’s impossible to get good requirements,” and that statement has never been true. I agree that it’s impossible to get <i>perfect </i>requirements, but that isn’t the same thing as getting <i>good </i>requirements. I would agree that “It is impossible to get good requirements if you don’t have very good requirement skills,” and in my experience that is a common case. &#160;I would also agree that “Projects usually don’t have very good requirements,” as an empirical observation—but not as a normative statement that we should accept as inevitable.&#160;</p>
<p>Like estimation skill, requirements skill is something that any true software professional should develop, and the state of the art in requirements at this time is far too advanced for even really smart people to invent everything they need to know on their own. Like estimation skill, a person is not going to learn adequate requirements skills by reading blog entries or watching short YouTube videos. Acquiring skill in requirements requires focused, book-length self-study or explicit training or both.&#160;</p>
<p>If your business truly doesn’t care about predictability (and some truly don’t), then letting your requirements emerge over the course of the project can be a good fit for business needs. But if your business does care about predictability, you should develop the skill to get good requirements, and then you should actually do the work to get them. You can still do the rest of the project using by-the-book Scrum, and then you’ll get the benefits of both good requirements and Scrum. </p>
<p>From my point of view, I often see agile-related claims that look kind of like this: <i>What practices should you use if you have:&#160;</i></p>
<ul>
<li><i>Mediocre skill in Estimation</i></li>
<li><i>Mediocre skill in Requirements</i></li>
<li><i>Good to excellent skill in Scrum and Related Practices</i></li>
</ul>
<p>Not too surprisingly, the answer to this question is, <i>Scrum and Related Practices</i>. I think a more interesting question is, <i>What practices should you use if you have:&#160;</i></p>
<ul>
<li><i>Good to excellent skill in Estimation</i></li>
<li><i>Good to excellent skill in Requirements</i></li>
<li><i>Good to excellent skill in Scrum and related practices</i></li>
</ul>
<p>Having competence in multiple areas opens up some doors that will be closed with a lesser skill set. In particular, it opens up the ability to favor predictability if your business needs that, or to favor flexibility if your business needs that. Agile is supposed to be about options, and I think that includes the option to develop in the way that best supports the business.&#160;</p>
<p><b>12.&#160;The typical estimation context involves moderate volatility and moderate levels of unknowns.</b></p>
<p>Ron Jeffries <a href="http://ronjeffries.com/articles/015-jul/mcconnell-2b/" title="writes">writes</a>, “It is conventional to behave as if all decent projects have mostly known requirements, low volatility, understood technology, …, and are therefore capable of being more or less readily estimated by following your favorite book.” I don’t know who said that, but it wasn’t me, and I agree with Ron that that statement doesn’t describe most of the projects that I have seen.&#160;</p>
<p>I think it would be more true to say, “The typical software project has requirements that are knowable in principle, but that are mostly unknown in practice due to insufficient requirements skills; low volatility in most areas with high volatility in selected areas; and technology that tends to be either mostly leading edge or mostly mature." In other words, software projects are challenging, but the challenge level is manageable. If you have developed the full set of skills a software professional should have, you will be able to overcome most of the challenges or all of them.&#160;</p>
<p>Of course there is a small percentage of projects that do have truly unknowable requirements and across-the-board volatility. I consider those to be corner cases. It’s good to explore corner cases, but also good not to lose sight of which cases are most common.&#160;</p>
<b>13.&#160;Responding to change over following a plan does not imply not having a plan.&#160;</b> <p>It’s amazing that in 2015 we’re still debating this point. Many of the #NoEstimates comments literally emphasize not having a plan, i.e., treating 100% of the project as emergent. They advocate a process—typically Scrum—but no plan beyond instantiating Scrum.&#160;</p>
<p>According to the Agile Manifesto, while agile is supposed to value responding to change, it also is supposed to value following a plan. The Agile Manifesto says, "there is value in the items on the right" which includes the phrase "following a plan."&#160;</p>
<p>While I agree that minimizing planning overhead is good project management, doing no planning at all is inconsistent with the Agile Manifesto, not acceptable to most businesses, and wastes some of Scrum's capabilities. One of the amazingly powerful aspects of Scrum is that it gives you the ability to <i>respond </i>to change; that doesn’t imply that you need to avoid committing to plans in the first place.&#160;</p>
<p>My company and I have seen Agile adoptions shut down in some companies because an Agile team is unwilling to commit to requirements up front or refuses to estimate up front. As a strategy, that’s just dumb. If you fight your business about providing estimates, even if you win the argument that day, you will still get knocked down a peg in the business’s eyes.&#160;</p>
<p>I've commented in&#160;<a href="https://www.youtube.com/watch?v=5Xwb0X-Obx8" title="other contexts">other contexts</a>&#160;that I have come to the conclusion that most&#160;businesses&#160;<i>would rather be wrong than vague</i>. Businesses prefer to plant a stake in the ground and move it later rather than avoiding planting a stake in the ground in the first place. The assertion that businesses value flexibility over predictability is Agile's great unvalidated assumption. Some businesses do value flexibility over predictability, but most do not. If in doubt, ask your business.&#160;</p>
<p>If your business does value predictability, use your velocity to estimate how much work you can do over the course of a project, and commit to a product backlog based on your demonstrated capacity for work. Your business will like that. Then, later, when your business changes its mind—which it probably will—you’ll still be able to <i>respond to change</i>. Your business will like that even more. &#160;</p>
<b>14.&#160;Scrum provides better support for estimation than waterfall ever did, and there does not have to be a trade-off between agility and predictability.&#160;</b> <p>Some of the #NoEstimates discussion seems to interpret challenges to #NoEstimates as challenges to the entire ecosystem of Agile practices, especially Scrum. Many of the comments imply that estimation will somehow impair agility. The examples cited to support that are mostly examples of unskilled misapplications of estimation practices, so I see them as additional examples of people not understanding estimation very well.&#160;</p>
<p>The idea that we have to trade off agility to achieve predictability is a false trade-off. If we define "agility" to mean, "no notion of our destination" or "treat all the requirements on the project as emergent," then of course there is a trade-off, by definition. If, on the other hand, we define "agility" as "ability to respond to change," then there doesn't have to be any trade-off. Indeed, if no one had ever uttered the word “agile” or applied it to Scrum, I would still want to use Scrum because of its support for estimation and predictability, as well as for its support for responding to change.&#160;</p>
<p>The combination of story pointing, velocity calculation,&#160;product backlog,&#160;short iterations, just-in-time sprint planning, and timely retrospectives after each sprint creates a nearly perfect context for effective estimation. To put it in estimation terminology, story pointing is a proxy based estimation technique. Velocity is calibrating the estimate with project data. The product backlog (when constructed with estimation in mind) gives us a very good proxy for size. Sprint planning and retrospectives give us the ability to "inspect and adapt" our estimates. All this means that Scrum provides better support for estimation than waterfall ever did.&#160;</p>
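<p>To make that calibration concrete (sprint history and backlog numbers invented for illustration): velocity measured from completed sprints converts a story-point backlog into a sprint-count forecast, and the observed sprint-to-sprint variation gives a rough uncertainty band around it:</p>

```python
import statistics

# Hypothetical sprint history: story points completed in each past sprint.
completed_points = [21, 25, 19, 24, 23]
backlog_points = 240  # remaining product backlog, in story points

velocity = statistics.mean(completed_points)   # throughput calibrated from data
spread = statistics.stdev(completed_points)    # sprint-to-sprint variation

sprints_expected = backlog_points / velocity
# Crude optimistic/pessimistic band derived from the velocity variation:
sprints_low = backlog_points / (velocity + spread)
sprints_high = backlog_points / (velocity - spread)

print(f"velocity: {velocity:.1f} pts/sprint (+/-{spread:.1f})")
print(f"forecast: ~{sprints_expected:.1f} sprints "
      f"(range {sprints_low:.1f} to {sprints_high:.1f})")
```

<p>Each sprint adds another data point, so the retrospective/inspect-and-adapt loop naturally tightens this forecast as the project proceeds.</p>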
<p>If a company truly is operating in a high uncertainty environment, Scrum can be an effective approach. In the more typical case in which a company is operating in a moderate uncertainty environment, Scrum is well-equipped to deal with the moderate level of uncertainty and provide high predictability (e.g., estimation) at the same time.&#160;</p>
<b>15.&#160;There are contexts where estimates provide little value.&#160;</b> <p>I don’t estimate how long it will take me to eat dinner, because I know I’m going to eat dinner regardless of what the estimate says. If I have a defect that keeps taking down my production system, the business doesn’t need an estimate for that because the issue needs to get fixed whether it takes an hour, a day, or a week.&#160;</p>
<p>The most common context I see where estimates are not done on an ongoing basis and truly provide little business value is online contexts, especially mobile, where the cycle times are measured in days or shorter, the business context is highly volatile, and the mission truly is, “Always do the next most useful thing with the resources available.”&#160;</p>
<p>In both these examples, however, there is a point on the scale at which estimates become valuable. If the work on the production system stretches into weeks or months, the business is going to want and need an estimate. As the mobile app matures from one person working for a few days to a team of people working for a few weeks, with more customers depending on specific functionality, the business is going to want more estimates. As the group doing the work expands, they'll need budget and headcount, and those numbers are determined by estimates. Enjoy the #NoEstimates context while it lasts; don’t assume that it will last forever.&#160;</p>
<b>16.&#160;This is not religion. We need to get more technical and more economic about software discussions.&#160;</b> <p>I’ve seen #NoEstimates advocates treat these questions of requirements quality, estimation effectiveness, agility, and predictability as value-laden moral discussions. "Agile" is a compliment and "Waterfall" is an invective. The tone of the argument is more moral than economic. The arguments are of the form, "Because this practice is <i>good</i>," rather than of the form, "Because this practice supports business goals X, Y, and Z."&#160;</p>
<p>That religion isn’t unique to Agile advocates, and I’ve seen just as much religion on the non-Agile sides of various discussions. It would be better for the industry at large if people could stay more technical and economic more often.&#160;</p>
<p><b>Agile is About Creating Options, Right?</b></p>
<p>I subscribe to the idea that engineering is about doing for a dime what any fool can do for a dollar, i.e., it's about economics. If we assume professional-level skills in agile practices, requirements, <i>and </i>estimation, the decision about how much work to do up front on a project should be an economic decision about which practices will achieve the business goals in the most cost-effective way. We consider issues including the cost of changing requirements and the value of predictability. If the environment is volatile and a high percentage of requirements are likely to spoil before they can be implemented, then it’s a bad economic decision to do lots of up front requirements work. If predictability provides little or no business value, emphasizing up front estimation work would be a bad economic decision.</p>
<p>On the other hand, if predictability does provide business value, then we should support that in a cost-effective way. If we do a lot of the requirements work up front, and some requirements spoil, but most do not, and that supports improved predictability, that would be a good economic choice.&#160;</p>
<p>The economics of these decisions are affected by the skills of the people involved. If my team is great at Scrum but poor at estimation and requirements, the economics of up front vs. emergent will tilt toward Scrum. If my team is great at estimation and requirements but poor at Scrum, the economics will tilt toward estimation and requirements.&#160;</p>
<p>Of course, skill sets are not divinely dictated or cast in stone; they can be improved through focused self-study and training. So we can treat the decision to invest in skills development as an economic issue too.&#160;</p>
<p><b>Decision to Develop Skills is an Economic Decision Too</b></p>
<p>What is the cost of training staff to reach competency in estimation and requirements? Does the cost of achieving competency exceed the likely benefits that would derive from competency? That goes back to the question of how much the business values predictability. If the business truly places no value on predictability, there won’t be any ROI from training staff in practices that support predictability. But I do not see that as the typical case.&#160;</p>
<p>My company and I can train software professionals to approach competency in both requirements and estimation in about a week. In my experience most businesses place enough value on predictability that investing a week to make that option available provides a good ROI to the business. Note: this is about making the option available, not necessarily exercising the option on every project.&#160;</p>
<p>My company and I can also train software professionals to approach competency in a full complement of Scrum and other Agile technical practices in about a week. That produces a good ROI too. In any given case, I would recommend both sets of training. If I had to recommend only one or the other, sometimes I would recommend starting with the Agile practices. But my real recommendation is to "embrace the and" and develop both sets of skills. &#160;</p>
<p>For context about training software professionals to "approach competency" in requirements, estimation, Scrum, and other Agile practices, I am using that term based on work we've done with our <a href="http://www.construx.com/uploadedFiles/Construx/Construx_Content/Resources/White_Papers/Construx%20Professional%20Dev%20Ladder.pdf" title="Professional Development Ladder">Professional Development Ladder</a>. In that ladder we define capability levels of "Introductory," "Competence," "Leadership," and "Mastery." A few days of classroom training will advance most people beyond Introductory and much of the way toward Competence in a particular skill. Additional hands-on experience, mentoring, and feedback will be needed to cement Competence in an area. Classroom study is just one way to acquire these skills. Self-study or working with an expert mentor can work about as well.&#160;The skills aren't hard to learn, but they aren't self-evident either. As I've said above, the state of the art in estimation, requirements, and agile practices has moved well beyond what even a smart person can discover on their own.&#160;Focused professional development of some kind or other is needed to acquire these skills.&#160;</p>
<p>Is a week enough to accomplish real competency? My company has been training software professionals for almost 20 years, and our consultants have trained upwards of 50,000 software professionals during that time. All of our consultants are highly experienced software professionals first, trainers second. We don't have any methodological ax to grind, so we focus on what is best for each individual client. We all work hands-on with clients so we know what is actually working on the ground and what isn't, and that experience feeds back into our training. We have also invested heavily in training our consultants to be excellent trainers. As a result, our&#160;<a href="http://www.construx.com/Construx_Service_Quality.aspx?id=16981" title="service quality is second to none">service quality is second to none</a>, and we can make a tremendous amount of progress with a few days of training. Of course additional coaching, mentoring and support are always helpful.&#160;</p>
<p><b>17.&#160;Agility plus predictability is better than agility alone.&#160;</b></p>
<p>Skills development in practices that support estimation and predictability vs. practices that support agility is not an&#160;<i>either/or&#160;</i>choice. A truly agile business would be able to be flexible when needed, or predictable when needed. A true software professional will be most effective when skilled in both skill sets.&#160;</p>
<p>If you think your business values agility only, ask your business what it values. Businesses vary, and you might work in a business that truly does value agility over predictability or that values agility exclusively. Many businesses value predictability over agility, however, so don't just assume it's one or the other. &#160;</p>
<p>I think it’s self-evident that a business that has both agility and predictability will outperform a business that has agility only. With today's powerful agile practices, especially Scrum, there's no reason we can't have both. &#160;</p>
<p>Overall, #NoEstimates seems like the proverbial solution in search of a problem. I don't see businesses clamoring to get rid of estimates. I see them clamoring to get better estimates. The good news for them is that agile practices, Scrum in particular, can provide excellent support for agility and estimation at the same time.&#160;</p>
<p>My closing thought, in this hash tag-happy discussion, is that #AgileWithEstimationWorksBest -- and #EstimationWithAgileWorksBest too.&#160;</p>
<h2>Resources&#160;</h2>
<ul>
<li>My <a href="https://youtu.be/55tfYRajpFI" title="video response">video response</a> to #NoEstimates&#160;</li>
<li>My <a href="https://youtu.be/FY9X21HA02w?list=PLwg-V1fR_cxvoF8eF5nkE7DrnVN4D7TNb" title="video explanation">video explanation</a> of Estimates, Targets, and Commitments</li>
<li><a href="http://www.stevemcconnell.com/EstimationQuickReference.pdf" title="Quick Reference Summary">Quick Reference Summary</a> of Estimation Techniques</li>
<li>Construx's Professional Development Ladder <a href="https://www.construx.com/uploadedFiles/Construx/Construx_Content/Resources/White_Papers/Construx%20Professional%20Dev%20Ladder.pdf" title="White Paper">White Paper</a></li>
<li><a href="http://www.construx.com/Software_Estimation_In_Depth/" title="Software Estimation in Depth">Software Estimation in Depth</a> Training</li>
<li><a href="http://www.construx.com/Agile_Estimation/" title="Agile Estimation">Agile Estimation</a>&#160;Training</li>
<li><a href="http://www.construx.com/Seminars/Base_Seminar/Agile_Planning_and_Estimation/" title="Agile Planning and Estimation">Agile Planning and Estimation</a>&#160;Training</li>
<li><a href="http://www.construx.com/Scrum_Boot_Camp/" title="Scrum Boot Camp">Scrum Boot Camp</a>&#160;Training</li>
<li><a href="http://www.construx.com/Requirements_Boot_Camp/" title="Requirements Boot Camp">Requirements Boot Camp</a> Training</li>
<li><a href="http://www.construx.com/Agile_Requirements_In_Depth/" title="Agile Requirements in Depth">Agile Requirements in Depth</a>&#160;Training</li>
<li><a href="http://www.construx.com/Seminars/?method=1&amp;topic=&amp;title=" title="Other seminars and training">Other in-person seminars and training</a></li>
<li><a href="https://cxlearn.com" title="Other online training">Other online training</a></li>
</ul>]]></content:encoded>
 </item>
 <item rdf:about="/10x_Software_Development/Understanding_Software_Project_Size_-_New_Lecture_Posted/?blogid=23485">
  <title>Understanding Software Project Size - New Lecture Posted</title>
  <link>https://www.construx.com/10x_Software_Development/Understanding_Software_Project_Size_-_New_Lecture_Posted/?blogid=23485</link>
  <description><![CDATA[I've uploaded a new lecture in my Understanding Software Projects lecture series. This lecture focuses on the critical topic of Software Size. If you've ever wondered why some early projects succeed while later similar projects fail, this lecture explains the]]></description>
  <dc:creator>stevemcc</dc:creator>
  <dc:date>2015-08-06T20:36:16Z</dc:date>
  <content:encoded><![CDATA[<p>I've uploaded a new lecture in my Understanding Software Projects lecture series. This lecture focuses on the critical topic of Software Size. If you've ever wondered why some early projects succeed while later similar projects fail, this lecture explains the basic dynamics that cause that. If you've wondered why Scrum projects struggle to scale, I share some insights on that topic.&#160;</p>
<p>I believe this is one of my best lectures in the series so far -- and it's a very important topic. It will be free for the next week, so check it out: <a href="https://cxlearn.com" title="https://cxlearn.com">https://cxlearn.com</a>. </p>
<p><span style="color: rgb(51, 51, 51); font-family: Arial, Helvetica, sans-serif; font-size: 12px; line-height: 20px;">Lectures posted so far include: &#160;</span></p>
<p style="margin: 0px 0px 10px; padding: 0px; list-style: none; color: rgb(51, 51, 51); font-family: Arial, Helvetica, sans-serif; font-size: 12px; line-height: 20px;">0.0 Understanding Software Projects - Intro<br style="margin: 0px; padding: 0px; list-style: none;" />&#160; &#160; &#160;0.1 Introduction - My Background<br style="margin: 0px; padding: 0px; list-style: none;" />&#160; &#160; &#160;0.2 Reading the News<br style="margin: 0px; padding: 0px; list-style: none;" />&#160; &#160; &#160;0.3 Definitions and Notations&#160;</p>
<p style="margin: 0px 0px 10px; padding: 0px; list-style: none; color: rgb(51, 51, 51); font-family: Arial, Helvetica, sans-serif; font-size: 12px; line-height: 20px;">1.0 The Software Lifecycle Model - Intro<br style="margin: 0px; padding: 0px; list-style: none;" />&#160; &#160; &#160;1.1 Variations in Iteration&#160;<br style="margin: 0px; padding: 0px; list-style: none;" />&#160; &#160; &#160;1.2 Lifecycle Model - Defect Removal<br style="margin: 0px; padding: 0px; list-style: none;" />&#160; &#160; &#160;1.3 Lifecycle Model Applied to Common Methodologies&#160;<br style="margin: 0px; padding: 0px; list-style: none;" />&#160; &#160; &#160;1.4 Lifecycle Model - Selecting an Iteration Approach&#160;&#160;</p>
<p style="margin: 0px 0px 10px; padding: 0px; list-style: none; font-family: Arial, Helvetica, sans-serif; font-size: 12px; line-height: 20px;"><b><font color="#ff0000">2.0 Software Size - Introduction (New)</font></b><br /><font color="#333333">&#160; &#160; &#160;2.01 Size - Examples of Size&#160;</font><br /><font color="#333333">&#160; &#160; &#160;2.05 Size - Comments on Lines of Code</font><br /><font color="#333333">&#160; &#160; &#160;2.1 Size - Staff Sizes</font>&#160;<br />&#160; &#160; &#160;2.2 Size - Schedule Basics&#160;<br />&#160; &#160; &#160;2.3 Size - Debian Size Claims&#160;</p>
<p style="margin: 0px 0px 10px; padding: 0px; list-style: none; font-family: Arial, Helvetica, sans-serif; font-size: 12px; line-height: 20px;"><font>3.0 Human Variation - Introduction</font></p>
<p style="margin: 0px 0px 10px; padding: 0px; list-style: none; color: rgb(51, 51, 51); font-family: Arial, Helvetica, sans-serif; font-size: 12px; line-height: 20px;">Check out the lectures at&#160;<a href="https://cxlearn.com/catalog/22" title="http://cxlearn.com" style="margin: 0px; padding: 0px; list-style: none; text-decoration: none; color: rgb(61, 122, 191);">http://cxlearn.com</a>!</p>]]></content:encoded>
 </item>
 <item rdf:about="/10x_Software_Development/17_Theses_on_Software_Estimation/?blogid=23485">
  <title>17 Theses on Software Estimation</title>
  <link>https://www.construx.com/10x_Software_Development/17_Theses_on_Software_Estimation/?blogid=23485</link>
  <description><![CDATA[(with apologies to Martin Luther)   Arriving late to the #NoEstimates discussion, I’m amazed at some of the assumptions that have gone unchallenged, and I’m also amazed at the absence of some fundamental points that no one seems to]]></description>
  <dc:creator>stevemcc</dc:creator>
  <dc:date>2015-08-02T16:20:05Z</dc:date>
  <content:encoded><![CDATA[<p>(with apologies to Martin Luther for the title)</p>
<p>Arriving late to the #NoEstimates discussion, I’m amazed at some of the assumptions that have gone unchallenged, and I’m also amazed at the absence of some fundamental points that no one seems to have made so far. The point of this article is to state unambiguously what I see as the arguments in favor of estimation in software and put #NoEstimates in context. &#160;</p>
<p><b>1.&#160;Estimation is often done badly and ineffectively and in an overly time-consuming way.&#160;</b></p>
<p>My company and I have taught upwards of 10,000 software professionals better estimation practices, and believe me, we have seen every imaginable horror story of estimation done poorly. There is no question that “estimation is often done badly” is a true observation of the state of the practice.&#160;</p>
<p><b>2.&#160;The root cause of poor estimation is usually lack of estimation skills.&#160;</b></p>
<p>Estimation done poorly is most often due to a lack of estimation skills. Smart people applying common sense is not sufficient to estimate software projects. Reading two-page blog articles on the internet is not going to teach anyone how to estimate very well, either. Good estimation is not that hard once you’ve developed the skill, but it isn’t intuitive or obvious, and it requires focused self-education or training.&#160;</p>
<p><b>3.&#160;Many comments in support of #NoEstimates demonstrate a lack of basic software estimation knowledge.&#160;</b></p>
<p>I don’t expect most #NoEstimates advocates to agree with this thesis, but as someone who does know a lot about estimation, I think it’s clear on its face. Here are some examples:</p>
<p>(a) Are estimation and forecasting the same thing? As far as software estimation is concerned, yes they are. (Just do a Google or Bing search of “definition of forecast”.) Estimation, forecasting, prediction--it's all the same basic activity, as far as software estimation is concerned.&#160;</p>
<p>(b) Is showing someone several pictures of kitchen remodels that have been completed for $30,000 and implying that the next kitchen remodel can be completed for $30,000 estimation? Yes, it is. That’s an implementation of a technique called Reference Class Forecasting.&#160;</p>
<p>(c) Does doing a few iterations, calculating team velocity, and then using that empirical velocity data to project a completion date count as estimation? Yes, it does. Not only is it estimation, it is a really effective form of estimation. I’ve heard people argue that because velocity is empirically based, it isn’t estimation. That argument is incorrect and shows a lack of basic understanding of the nature of estimation.&#160;</p>
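<p>The velocity-based projection described in (c) can be sketched in a few lines; the sprint and backlog numbers below are hypothetical:&#160;</p>

```python
# Minimal sketch of velocity-based projection (all numbers hypothetical).
# Observed velocity from completed sprints is used to project how many
# sprints the remaining backlog will take -- an empirical estimate.
import math

completed_sprints = [23, 19, 25, 21]   # story points delivered per sprint
remaining_backlog = 180                # story points still in the backlog

velocity = sum(completed_sprints) / len(completed_sprints)
sprints_remaining = math.ceil(remaining_backlog / velocity)

print(f"Average velocity: {velocity:.1f} points/sprint")   # 22.0
print(f"Projected sprints to finish: {sprints_remaining}")  # 9
```

Using observed throughput to project a date is exactly the kind of empirically based estimation the thesis describes.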
<p>(d) Is estimation time consuming and a waste of time? One of the most common symptoms of lack of estimation skill is spending too much time on the wrong activities. This work is often well-intentioned, but it’s common to see well-intentioned people doing more work than they need to get worse answers than they could be getting. &#160;</p>
<p><b>4.&#160;Being able to estimate effectively is a skill that any true software professional needs to develop, even if they don’t need it on every project.&#160;</b></p>
<p>“Estimating is problematic, therefore software professionals should not develop estimation skill” is a common line of reasoning in #NoEstimates. Unless a person wants to argue that the need for estimation is rare, this argument is not supported by the rest of #NoEstimates’ own premises.&#160;</p>
<p>If I agreed, for the sake of argument, that 50% of projects don’t need to be estimated, the other 50% of projects would still benefit from the estimators having good estimation skills. If you’re a true software professional, you should develop estimation skill so that you can estimate competently on the 50% of projects that do require estimation.&#160;</p>
<p>In practice, I think the number of projects that need estimates is much higher than 50%.&#160;</p>
<p><b>5.&#160;Estimates serve numerous legitimate, important business purposes.</b></p>
<p>Estimates are used by businesses in numerous ways, including:&#160;</p>
<ul>
<li>Allocating budgets to projects (i.e., estimating the effort and budget of each project)</li>
<li>Making cost/benefit decisions at the project/product level, which is based on cost (software estimate) and benefit (defined feature set)</li>
<li>Deciding which projects get funded and which do not, which is often based on cost/benefit</li>
<li>Deciding which projects get funded this year vs. next year, which is often based on estimates of which projects will finish this year</li>
<li>Deciding which projects will be funded from CapEx budget and which will be funded from OpEx budget, which is based on estimates of total project effort, i.e., budget</li>
<li>Allocating staff to specific projects, i.e., estimates of how many total staff will be needed on each project</li>
<li>Allocating staff within a project to different component teams or feature teams, which is based on estimates of scope of each component or feature area</li>
<li>Allocating staff to non-project work streams (e.g., budget for a product support group, which is based on estimates for the amount of support work needed)</li>
<li>Making commitments to internal business partners (based on projects’ estimated availability dates)</li>
<li>Making commitments to the marketplace (based on estimated release dates)</li>
<li>Forecasting financials (based on when software capabilities will be completed and revenue or savings can be booked against them)</li>
<li>Tracking project progress (comparing actual progress to planned (estimated) progress)</li>
<li>Planning when staff will be available to start the next project (by estimating when staff will finish working on the current project)</li>
<li>Prioritizing specific features on a cost/benefit basis (where cost is an estimate of development effort)</li>
</ul>
<p>These are just a subset of the many legitimate reasons that businesses request estimates from their software teams. I would be very interested to hear how #NoEstimates advocates suggest that a business would operate if you remove the ability to use estimates for each of these purposes.</p>
<p>The #NoEstimates response to these business needs is typically of the form, “Estimates are inaccurate and therefore not useful for these purposes” rather than, “The business doesn’t need estimates for these purposes.”&#160;</p>
<p>That argument really just says that businesses are currently operating on much worse quality information than they should be, and probably making poorer decisions as a result, because the software staff are not providing very good estimates. If software staff provided more accurate estimates, the business would make better decisions in each of these areas, which would make the business stronger.&#160;</p>
<p>This all supports my point that improved estimation skill should be part of the definition of being a true software professional.&#160;</p>
<b>6.&#160;Part of being an effective estimator is understanding that different estimation techniques should be used for different kinds of estimates.&#160;</b> <p>One thread that runs throughout the #NoEstimates discussions is lack of clarity about whether we’re estimating before the project starts, very early in the project, or after the project is underway. The conversation is also unclear about whether the estimates are project-level estimates, task-level estimates, sprint-level estimates, or some combination. Some of the comments imply ineffective attempts to combine kinds of estimates—the most common confusion I’ve read is trying to use task-level estimates to estimate a whole project, which is another example of lack of software estimation skill.&#160;</p>
<p>Effective estimation requires that the right kind of technique be applied to each different kind of estimate. Learning when to use each technique, as well as learning each technique, requires some professional skills development.&#160;</p>
<b>7.&#160;Estimation and planning are not the same thing, and you can estimate things that you can’t plan.&#160;</b> <p>Many of the examples given in support of #NoEstimates are actually indictments of overly detailed waterfall planning, not estimation. The simple way to understand the distinction is to remember that planning is about “how” and estimation is about “how much.”&#160;</p>
<p>Can I “estimate” a chess game, if by “estimate” I mean how each piece will move throughout the game? No, because that isn’t estimation; it’s planning; it’s “how.”</p>
<p>Can I estimate a chess game in the sense of “how much”? Sure. I can collect historical data on the length of chess games and know both the average length and the variation around that average and predict the length of a game.&#160;</p>
<p>More to the point, estimating software projects is not analogous to estimating one chess game. It’s analogous to estimating a series of chess games. People who are not skilled in estimation often assume it’s more difficult to estimate a series of games than to estimate an individual game, but estimating the series is actually easier. Indeed, the more chess games in the set, the more accurately we can estimate the set, once you understand the math involved.&#160;</p>
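<p>A quick simulation illustrates the math behind estimating a series, under an assumed (hypothetical) game-length distribution: the total length of n independent games grows like n while its standard deviation grows only like the square root of n, so the relative uncertainty of the series shrinks as the series gets longer:&#160;</p>

```python
# Sketch: why a series of chess games is easier to estimate than one game.
# Assumes independent game lengths drawn from a hypothetical distribution
# (mean 40 moves, standard deviation 15). The relative uncertainty of the
# total shrinks roughly as 1/sqrt(n) as the series gets longer.
import random
random.seed(42)

MEAN_MOVES, SD_MOVES = 40, 15

def series_length(n_games):
    """Total moves in one simulated series of n games."""
    return sum(random.gauss(MEAN_MOVES, SD_MOVES) for _ in range(n_games))

def relative_uncertainty(n_games, trials=2000):
    """Standard deviation of the series total, as a fraction of its mean."""
    totals = [series_length(n_games) for _ in range(trials)]
    avg = sum(totals) / trials
    sd = (sum((t - avg) ** 2 for t in totals) / trials) ** 0.5
    return sd / avg

for n in (1, 10, 100):
    print(f"{n:3d} games: relative uncertainty ~ {relative_uncertainty(n):.0%}")
```

The printed relative uncertainty drops sharply from 1 game to 100 games, which is the sense in which the larger set is easier, not harder, to estimate.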
<b>8.&#160;You can estimate what you don’t know, up to a point.&#160;</b> <p>In addition to estimating “how much,” you can also estimate “how uncertain.” In the #NoEstimates discussions, people throw out lots of examples along the lines of, “My project was doing unprecedented work in Area X, and therefore it was impossible to estimate the whole project.” That isn’t really true. What you would end up with in a case like that is high variability in your estimate for Area X, and a common estimation mistake is letting Area X’s uncertainty apply to the whole project rather than constraining its uncertainty to Area X alone.&#160;</p>
<p>Most projects contain a mix of precedented and unprecedented work, or certain and uncertain work. Decomposing the work, estimating uncertainty in different areas, and building up an overall estimate from that is one way of dealing with uncertainty in estimates.&#160;</p>
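<p>That build-up can be sketched numerically. The areas and figures below are hypothetical, and the roll-up assumes the per-area estimates are roughly independent, so their uncertainties combine root-sum-square rather than by straight addition:&#160;</p>

```python
# Sketch of building a project estimate from per-area estimates.
# Hypothetical areas and figures; under a rough independence assumption,
# uncertainties combine root-sum-square, so Area X's high uncertainty
# widens the total estimate without applying to the whole project.
areas = {
    # area: (expected effort, uncertainty), in staff-weeks
    "UI":           (20, 3),
    "Database":     (15, 2),
    "Area X (new)": (10, 8),   # unprecedented work: high uncertainty
}

total = sum(mean for mean, _ in areas.values())
combined_sd = sum(sd ** 2 for _, sd in areas.values()) ** 0.5
naive_sd = sum(sd for _, sd in areas.values())

print(f"Project estimate: {total} +/- {combined_sd:.1f} staff-weeks")  # 45 +/- 8.8
print(f"(adding uncertainties naively would give +/- {naive_sd})")     # +/- 13
```

The combined uncertainty (about 8.8) is dominated by, but smaller than, what you'd get by letting Area X's uncertainty spread across the whole project.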
<b>9.&#160;Both estimation and control are needed to achieve predictability.&#160;</b> <p>Much of the writing on Agile development emphasizes project control over project estimation. I actually agree that project control is more powerful than project estimation; however, effective estimation usually plays an essential role in achieving effective control.&#160;</p>
<p>To put this in Agile Manifesto-like terms:</p>
<p style="text-align: center;">We have come to value project control over project estimation,&#160;<br />as a means of achieving predictability</p>
<p style="text-align: center;">As in the Agile Manifesto, we value both terms, which means we still value the term on the right.&#160;</p>
<p>#NoEstimates seems to pay lip service to both terms, but the emphasis from the hashtag onward is really about discarding the term on the right. This is a case where I believe the right answer is <i>both/and</i>, not <i>either/or</i>.&#160;</p>
<b>10.&#160;People use the word "estimate" sloppily.&#160;</b> <p>No doubt. Lack of understanding of estimation is not limited to people tweeting about #NoEstimates. Business partners often use the word “estimate” to refer to what would more properly be called a “planning target” or “commitment.” Further, one common mistake software professionals make is trying to create estimates when the business is really asking for a commitment, or asking for a plan to meet a target, but using the word “estimate” to ask for that.&#160;</p>
<p>We have worked with many companies to achieve organizational clarity about estimates, targets, and commitments. Clarifying these terms makes a huge difference in the dynamics around creating, presenting, and using software estimates effectively.&#160;</p>
<b>11.&#160;Good project-level estimation depends on good requirements, and average requirements skills are about as bad as average estimation skills.&#160;</b> <p>A common refrain in Agile development is “It’s impossible to get good requirements,” and that statement has never been true. I agree that it’s impossible to get <i>perfect </i>requirements, but that isn’t the same thing as getting <i>good </i>requirements. I would agree that “It is impossible to get good requirements if you don’t have very good requirements skills,” and in my experience that is a common case. I would also agree that “Projects usually don’t have very good requirements,” as an empirical observation—but not as a normative statement that we should accept as inevitable.&#160;</p>
<p>Like estimation skill, requirements skill is something that any true software professional should develop, and the state of the art in requirements at this time is far too advanced for even really smart people to invent everything they need to know on their own. Like estimation skill, a person is not going to learn adequate requirements skills by reading blog entries or watching short YouTube videos. Acquiring skill in requirements requires focused, book-length self-study or explicit training or both.&#160;</p>
<p>Why would we care about getting good requirements if we’re Agile? Isn’t trying to get good requirements just waterfall? The answer is both yes and no. You can’t achieve good predictability of the combination of cost, schedule, and functionality if you don’t have a good definition of functionality. If your business truly doesn’t care about predictability (and some truly don’t), then letting your requirements emerge over the course of the project can be a good fit for business needs. But if your business does care about predictability, you should develop the skill to get good requirements, and then you should actually do the work to get them. You can still do the rest of the project using by-the-book Scrum, and then you’ll get the benefits of both good requirements and Scrum.&#160;</p>
<b>12.&#160;The typical estimation context involves moderate volatility and a moderate level of unknowns.&#160;</b> <p>Ron Jeffries <a href="http://ronjeffries.com/articles/015-jul/mcconnell-2b/" title="writes">writes</a>, “It is conventional to behave as if all decent projects have mostly known requirements, low volatility, understood technology, …, and are therefore capable of being more or less readily estimated by following your favorite book.”&#160;</p>
<p>I don’t know who said that, but it wasn’t me, and I agree with Ron that that statement doesn’t describe most of the projects that I have seen.&#160;</p>
<p>I think it would be more true to say, “The typical software project has requirements that are knowable in principle, but that are mostly unknown in practice due to insufficient requirements skills; low volatility in most areas with high volatility in selected areas; and technology that tends to be either mostly leading edge or mostly mature; …; and are therefore amenable to having both effective requirements work and effective estimation work performed on those projects, given sufficient training in both skill sets.”</p>
<p>In other words, software projects are challenging, and they’re even more challenging if you don’t have the skills needed to work on them. If you have developed the right skills, the projects will still be challenging, but you’ll be able to overcome most of the challenges or all of them.&#160;</p>
<p>Of course there is a small percentage of projects that do have truly unknowable requirements and across-the-board volatility. I consider those to be corner cases. It’s good to explore corner cases, but also good not to lose sight of which cases are most common.&#160;</p>
<b>13.&#160;Responding to change over following a plan does not imply not having a plan.&#160;</b> <p>It’s amazing that in 2015 we’re still debating this point. Many of the #NoEstimates comments literally emphasize not having a plan, i.e., treating 100% of the project as emergent. They advocate a process—typically Scrum—but no plan beyond instantiating Scrum.&#160;</p>
<p>According to the Agile Manifesto, while agile is supposed to value responding to change, it also is supposed to value following a plan. Doing no planning at all is not only inconsistent with the Agile Manifesto, it also wastes some of Scrum's capabilities. One of the amazingly powerful aspects of Scrum is that it gives you the ability to <i>respond </i>to change; and that doesn’t imply that you need to avoid committing to plans in the first place.&#160;</p>
<p>My company and I have seen Agile adoptions shut down in some companies because an Agile team is unwilling to commit to requirements up front or refuses to estimate up front. As a strategy, that’s just dumb. If you fight your business up front about providing estimates, even if you win the argument that day, you will still get knocked down a peg in the business’s eyes.&#160;</p>
<p>Instead, use your velocity to estimate how much work you can do over the course of a project, and commit to a product backlog based on your demonstrated capacity for work. Your business will like that. Then, later, when your business changes its mind—which it probably will—you’ll be able to <i>respond to change</i>. Your business will like that even more. Wouldn’t you rather look good twice than look bad once?&#160;</p>
<b>14.&#160;Scrum provides better support for estimation than waterfall ever did, and there does not have to be a trade-off between agility and predictability.&#160;</b> <p>Some of the #NoEstimates discussion seems to interpret challenges to #NoEstimates as challenges to the entire ecosystem of Agile practices, especially Scrum. Many of the comments imply that predictability comes at the expense of agility. The examples cited to support that are mostly examples of unskilled misapplications of estimation practices, so I see them as additional examples of people not understanding estimation very well.&#160;</p>
<p>The idea that we have to trade off agility to achieve predictability is a false trade-off. In particular, if no one had ever uttered the word “agile,” I would still want to use Scrum because of its support for estimation and predictability.&#160;</p>
<p>The combination of story pointing, product backlog, velocity calculation, short iterations, just-in-time sprint planning, and timely retrospectives after each sprint creates a nearly perfect context for effective estimation. Scrum provides better support for estimation than waterfall ever did.&#160;</p>
<p>If a company truly is operating in a high uncertainty environment, Scrum can be an effective approach. In the more typical case in which a company is operating in a moderate uncertainty environment, Scrum is well-equipped to deal with the moderate level of uncertainty and provide high predictability (e.g., estimation) at the same time.&#160;</p>
<b>15.&#160;There are contexts where estimates provide little value.&#160;</b> <p>I don’t estimate how long it will take me to eat dinner, because I know I’m going to eat dinner regardless of what the estimate says. If I have a defect that keeps taking down my production system, the business doesn’t need an estimate for that because the issue needs to get fixed whether it takes an hour, a day, or a week.&#160;</p>
<p>The most common context I see where estimates are not done on an ongoing basis and truly provide little business value is online contexts, especially mobile, where the cycle times are measured in days or shorter, the business context is highly volatile, and the mission truly is, “Always do the next most useful thing with the resources available.”&#160;</p>
<p>In both these examples, however, there is a point on the scale at which estimates become valuable. If the work on the production system stretches into weeks or months, the business is going to want and need an estimate. As the mobile app matures from one person working for a few days to a team of people working for a few weeks, with more customers depending on specific functionality, the business is going to want more estimates. Enjoy the #NoEstimates context while it lasts; don’t assume that it will last forever.&#160;</p>
<b>16.&#160;This is not religion. We need to get more technical and economic about software discussions.&#160;</b> <p>I’ve seen #NoEstimates advocates treat these questions of requirements volatility, estimation effectiveness, and supposed tradeoffs between agility and predictability as value-laden moral discussions in which their experience with usually-bad requirements and usually-bad estimates calls for an iterative approach like pure Scrum, rather than a front-loaded approach like Scrum with a pre-populated product backlog. In these discussions, “Waterfall” is used as an invective, where the tone of the argument is often more moral than economic. That religion isn’t unique to Agile advocates, and I’ve seen just as much religion on the non-Agile sides of various discussions. I’ve appreciated my most recent discussion with Ron Jeffries because he hasn’t done that. It would be better for the industry at large if people could stay more technical and economic more often.&#160;</p>
<p>For my part, software is not religion, and the ratio of work done up front on a software project is not a moral issue. If we assume professional-level skills in agile practices, requirements, and estimation, the decision about how much work to do up front should be an economic decision based on cost of change and value of predictability. If the environment is volatile enough, then it’s a bad economic decision to do lots of up front requirements work just to have a high percentage of requirements spoil before they can be implemented. If there’s little or no business value created by predictability, that also suggests that emphasizing up front estimation work would be a bad economic decision.</p>
<p>On the other hand, if the business does value predictability, then how we support that predictability should also be an economic decision. If we do a lot of the requirements work up front, and some requirements spoil, but most do not, and that supports improved predictability, and the business derives value from that, that would be a good economic choice.&#160;</p>
<p>The economics of these decisions are affected by the skills of the people involved. If my team is great at Scrum but poor at estimation and requirements, the economics of up front vs. emergent will tilt one way. If my team is great at estimation and requirements but poor at Scrum, the economics might tilt the other way.&#160;</p>
<p>Of course, skill sets are not divinely dictated or cast in stone; they can be improved through focused self-study and training. So we can treat the question of whether we should invest in developing additional skills as an economic issue too.&#160;</p>
<p>What is the cost of training staff to reach proficiency in estimation and requirements? Does the cost of achieving proficiency exceed the likely benefits that would derive from proficiency? That goes back to the question of how much the business values predictability. If the business truly places no value on predictability, there won’t be any ROI from training staff in practices that support predictability. But I do not see that as the typical case.&#160;</p>
<p>My company and I can train software professionals to become proficient in both requirements and estimation in about a week. In my experience most businesses place enough value on predictability that investing a week to make that option available provides a good ROI to the business. Note: this is about making the option available, not necessarily exercising the option on every project.&#160;</p>
<p>My company and I can also train software professionals to become proficient in a full complement of Scrum and other Agile technical practices in about a week. That produces a good ROI too. In any given case, I would recommend both sets of training. If I had to recommend only one or the other, sometimes I would recommend starting with the Agile practices. But I wouldn’t recommend stopping with them.&#160;</p>
<p>Skills development in practices that support predictability vs. practices that support agility is not an either/or decision. A truly agile business would be able to be flexible when needed, or predictable when needed. A true software professional will be most effective when skilled in both skill sets.&#160;</p>
<b>17.&#160;Agility plus predictability is better than agility alone.&#160;</b> <p>If you think your business values agility only, ask your business what it values. Businesses vary, and you might work in a business that truly does value agility over predictability or that values agility exclusively.&#160;</p>
<p>In some cases, businesses will value predictability over agility. Odds are that your business actually values both agility and predictability. The point is, ask the business, don’t just assume it’s one or the other.&#160;</p>
<p>I think it’s self-evident that a business that has both agility and predictability will outperform a business that has agility only. We need to get past the either/or thinking that limits us to one set of skills or the other and embrace both/and thinking that leads us to develop the full set of skills needed to become true software professionals.&#160;</p>
<h2>Resources&#160;</h2>
<ul>
<li>My <a href="https://youtu.be/55tfYRajpFI" title="video response">video response</a> to #NoEstimates&#160;</li>
<li><a href="http://www.construx.com/Software_Estimation_In_Depth/" title="Software Estimation in Depth">Software Estimation in Depth</a> Training</li>
<li><a href="http://www.construx.com/Agile_Estimation/" title="Agile Estimation">Agile Estimation</a>&#160;Training</li>
<li><a href="http://www.construx.com/Seminars/Base_Seminar/Agile_Planning_and_Estimation/" title="Agile Planning and Estimation">Agile Planning and Estimation</a>&#160;Training</li>
<li><a href="http://www.construx.com/Scrum_Boot_Camp/" title="Scrum Boot Camp">Scrum Boot Camp</a>&#160;Training</li>
<li><a href="http://www.construx.com/Requirements_Boot_Camp/" title="Requirements Boot Camp">Requirements Boot Camp</a> Training</li>
<li><a href="http://www.construx.com/Agile_Requirements_In_Depth/" title="Agile Requirements in Depth">Agile Requirements in Depth</a>&#160;Training</li>
<li><a href="http://www.construx.com/Seminars/?method=1&amp;topic=&amp;title=" title="Other seminars and training">Other in-person seminars and training</a></li>
<li><a href="https://cxlearn.com" title="Other online training">Other online training</a></li>
</ul>]]></content:encoded>
 </item>
 <item rdf:about="/10x_Software_Development/_NoEstimates_-_Response_to_Ron_Jeffries/?blogid=23485">
  <title>#NoEstimates - Response to Ron Jeffries</title>
  <link>https://www.construx.com/10x_Software_Development/_NoEstimates_-_Response_to_Ron_Jeffries/?blogid=23485</link>
  <description><![CDATA[Ron Jeffries posted a nice response to my #NoEstimates video. I think his response is representative of some of the more thoughtful thinking in the #NoEstimates space, but it still ultimately misses the point of estimation.]]></description>
  <dc:creator>stevemcc</dc:creator>
  <dc:date>2015-07-31T18:22:09Z</dc:date>
  <content:encoded><![CDATA[<p>Ron Jeffries posted a <a href="http://ronjeffries.com/articles/015-jul/mcconnell/" title="nice response">thoughtful response</a> to my #NoEstimates <a href="https://www.youtube.com/watch?v=55tfYRajpFI" title="video">video</a>. While I like some elements of his response, it still ultimately glosses over problems with #NoEstimates.&#160;</p>
<p>I'll walk through Ron's critique and show where I think it makes good points vs. where it misses the point.&#160;</p>
<h2>Ron's First Remodel of my Kitchen Remodel Example</h2>
<p>Ron describes a variation on my video's kitchen remodel story. (If Ron is modifying my original story, does that qualify as fan fic??? Cool!) He says the real #NoEstimates way to approach that kind of remodel would be for the contractor to say something like, "Let's divide your remodel up into areas, and we'll allocate $6,000 per area." The customer then says, "I need 15 linear feet of cabinets. What kind of cabinets can I get for $6,000?" Ron characterizes that as a "very answerable question." The contractor then goes through the rest of the project similarly.&#160;</p>
<p>I like the idea of dividing the project into pieces, and I like the idea of budgeting each piece individually. But what makes it possible to break the project down into areas with budget amounts in each area? What makes it possible to know that we can deliver 15 linear feet of cabinets for $6,000? Ron says that question is "very answerable." What makes that question "very answerable"? &#160;</p>
<p>Estimation!&#160;</p>
<p>Specifically, we can answer that question because we have lots of <i>historical data </i>about the cost of kitchen cabinets. As Ron says, "Here are pictures of $30,000 kitchens I've done in the past." That's historical data from completed past projects. As I discuss in my estimation book, historical data is the key to good estimation in general, whether for kitchen cabinets or software. In software, Ron's example would be called a "reference table of similar completed projects," which is a kind of "estimation by analogy."&#160;</p>
<p>Far from supporting #NoEstimates, the example supports the value of collecting historical data so that you can use it in estimates.&#160;</p>
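To make the idea concrete, here is a minimal sketch of estimation by analogy from a reference table. This is my own hypothetical illustration, not from Ron's post or the video; all of the project figures are invented.

```python
# Hypothetical sketch of "estimation by analogy": derive a unit cost from
# historical data (completed kitchen remodels here), then apply it to a
# new request. All figures are invented for illustration.

# Reference table of similar completed projects: (linear feet, cabinet cost)
completed_projects = [
    (12, 4_800),   # past remodel A
    (20, 8_400),   # past remodel B
    (15, 5_900),   # past remodel C
]

# Unit cost observed in each past project
unit_costs = [cost / feet for feet, cost in completed_projects]
avg_unit_cost = sum(unit_costs) / len(unit_costs)

# Estimate for the new request: 15 linear feet of cabinets
estimate = 15 * avg_unit_cost
print(f"Estimated cabinet cost: ${estimate:,.0f}")  # → Estimated cabinet cost: $6,067
```

The same pattern applies to software: replace linear feet with a size measure (stories, features, function points) and the reference table with your own completed projects.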
<h2>Ron's Second Remodel of My Kitchen Remodel Example</h2>
<p>Ron presents a second modification of my scenario, this one based on the observation that kitchens involve physical material and software doesn't. "Kitchens are 'hard', Software is 'soft'."</p>
<p><i>(The whole hard vs. soft argument is a red herring. Yes, there are physical components in a kitchen remodel and there are few if any physical components in most software projects. So that's a difference. But even with the physical components in a kitchen remodel, the cost of labor is a major cost driver, just as it is in software, and more to the point, the labor cost is the primary source of uncertainty and risk in both cases. The presence of uncertainty and risk is the factor that makes estimation interesting in both cases. If there weren't any uncertainty or risk, we could just look up the correct answer in a reference guide. Estimation would not present any challenges, and we would not need to write blog articles or create hashtags about it. So I think the contexts are more similar than different. Having said that, this issue really is beside the point.)</i></p>
<p>Ron goes on to say that, because software is soft, if the kitchen remodel were a software project we could just build it up $1,000 at a time, always doing the next most useful thing, and always leaving the kitchen in a state in which it can be used each day. The customer can inspect progress each day and give feedback on the direction. As we go along, if we see we don't really need $6,000 for a sink, we can redirect those funds to other areas, or just not spend them at all. If we get to the point where we've spent $20,000 and we're satisfied with where we are, we can just stop, and we'll have saved $10,000.&#160;</p>
<p>This sounds appealing and probably works in some cases, especially in cases where the people have done the same kind of work many, many times and have a well-calibrated gut feel that they can do the whole project satisfactorily for $30,000. However, it also depends on <i>available resources exceeding the resources needed to satisfy requirements</i>. I would love to work in an environment that had excess resources, but my experience says that resources normally fall short of what is needed to satisfy requirements.&#160;</p>
<p>When resources are not sufficient to satisfy the requirements, a less idealized version of Ron's scenario would go more like this:&#160;</p>
<p>The contractor diligently gets to work, spending $1,000 per day. The kitchen is indeed usable each day, and each day the customer agrees that the kitchen is incrementally better. After reaching the $15,000 mark, however, the customer says, "It doesn't look to me like we're halfway done with the work. I like each piece <i>better </i>than I did before, but we're nowhere near the end state I wanted." The contractor asks the customer for more detailed feedback and tries to adjust. The daily deliveries continue until the entire $30,000 is gone.&#160;</p>
<p>The kitchen is better, and it is usable, but at the project retrospective the customer says, "None of the major parts are really what I wanted. If I'd known ahead of time that this approach would not get me what I wanted in any category, I would have said, Do the appliances and the countertops, and that's all. That way I would at least have been satisfied with <i>something</i>. As it turned out, I'm not satisfied with <i>anything</i>."&#160;</p>
<p>In this case, "collaboration" turned into "going down the drain together," which is not what anyone wanted.&#160;</p>
<p>How do you avoid this outcome? You&#160;<i>estimate&#160;</i>the cost of each of the components. Or you give ranges of estimates and work with the customer to develop budgets for each area. Estimates and budgets help the customer prioritize, which is one of the more common reasons customers want estimates. &#160;</p>
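The mechanics of using ranged estimates to drive prioritization can be sketched briefly. This is a hypothetical illustration of the general idea, not something from Ron's post; the component names and dollar figures are invented.

```python
# Hypothetical sketch: per-component range estimates (low, high) let the
# customer prioritize against a fixed budget BEFORE the money is spent.
# All numbers are invented for illustration.

components = {
    "cabinets":    (5_000, 8_000),
    "countertops": (3_000, 5_000),
    "appliances":  (6_000, 9_000),
    "sink":        (1_000, 2_000),
    "flooring":    (4_000, 7_000),
}
budget = 30_000

low_total = sum(low for low, _ in components.values())
high_total = sum(high for _, high in components.values())
print(f"Total range: ${low_total:,}-${high_total:,} vs. budget ${budget:,}")

# If the high end exceeds the budget, the customer can cut scope up front
# (drop the lowest-priority items) instead of discovering the shortfall
# only after the budget is exhausted.
if high_total > budget:
    print("High-end estimate exceeds budget: prioritize before starting.")
```

The point is not the arithmetic, which is trivial, but that the ranges exist at all: without estimates there is nothing to prioritize against.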
<h2>Ron's Third Example&#160;</h2>
<p>Ron gives a third example in which he built a database product that no one had built before. There are ways to estimate that kind of work (more than you'd think, if you haven't received training in software estimation), but there is going to be more variability in those estimates, and if there are enough unknowns the variability might be high enough to make the estimates worthless. That's a better example of a case in which #NoEstimates might apply.&#160;But even then, I think #AskTheCustomer is a better position than #NoEstimates, or at least better than #AssumeNoEstimates, which is what #NoEstimates is often taken to imply.&#160;</p>
<h2>Summary</h2>
<p>Ron's first example is based on expert estimation using historical data and directly supports #KnowWhenToEstimate. His example actually undermines #NoEstimates.&#160;</p>
<p>Ron's second example assumes resources exceed what is needed to satisfy the requirements. When assumptions are adjusted to the more common condition of scarce resources, Ron's second example also supports the need for estimates.&#160;</p>
<p>Ron closes with encouragement to get better at working within budgets (I agree!) and at collaborating with customers to identify budgets and similar constraints (I agree!). He also encourages us to get better at "giving an idea what we can do with that slice, for a slice of the budget." I agree again, and the only way to give an idea of what we can do with that slice is through estimation!&#160;</p>
<p>None of this should be taken as a knock against decomposing into parts or building incrementally. Estimation by decomposition is a fundamental estimation approach. And I like the incremental emphasis in Ron's examples. It's just that, while building incrementally is good, building incrementally with predictability is even better.&#160;</p>]]></content:encoded>
 </item>
 <item rdf:about="/10x_Software_Development/_NoEstimates/?blogid=23485">
  <title>#NoEstimates</title>
  <link>https://www.construx.com/10x_Software_Development/_NoEstimates/?blogid=23485</link>
  <description><![CDATA[I've posted a YouTube video that gives my perspective on #NoEstimates.&#160;   This is in the new Construx Brain Casts video series.]]></description>
  <dc:creator>stevemcc</dc:creator>
  <dc:date>2015-07-30T21:13:20Z</dc:date>
  <content:encoded><![CDATA[<p>I've posted a <a href="https://www.youtube.com/watch?v=55tfYRajpFI" title="YouTube video">YouTube video</a> that gives my perspective on #NoEstimates.&#160;</p>
<p>This is in the new Construx <a href="https://www.youtube.com/user/ConstruxSoftware" title="Brain Casts">Brain Casts</a> video series.&#160;</p>
<p><iframe width="560" height="315" src="https://www.youtube.com/embed/55tfYRajpFI?rel=0" frameborder="0"></iframe></p>]]></content:encoded>
 </item>
 <item rdf:about="/10x_Software_Development/Human_Variation_Introduction_-_New_Lecture_Posted/?blogid=23485">
  <title>Human Variation Introduction - New Lecture Posted</title>
  <link>https://www.construx.com/10x_Software_Development/Human_Variation_Introduction_-_New_Lecture_Posted/?blogid=23485</link>
  <description><![CDATA[In this week&#39;s lecture (https://cxlearn.com) I introduce the topic of human variation. I start by describing the general phenomenon of 10x variation. I briefly overview the research on 10x. I describe the problems that 10x variation presents]]></description>
  <dc:creator>stevemcc</dc:creator>
  <dc:date>2015-07-21T16:58:06Z</dc:date>
  <content:encoded><![CDATA[<p>In this week's lecture (<a href="https://cxlearn.com" title="https://cxlearn.com">https://cxlearn.com</a>) I introduce the topic of human variation. I start by describing the general phenomenon of 10x variation. I briefly overview the research on 10x. I describe the problems that 10x variation presents for research in software engineering. I go into the specific examples of the Chrysler C3 project and the New York Times Chief Programmer Team project. And I summarize a few of the software development issues that are strongly affected by human variation. &#160;</p>
<p>Lectures posted so far include: &#160;</p>
<p>0.0 Understanding Software Projects - Intro<br />&#160; &#160; &#160;0.1 Introduction - My Background<br />&#160; &#160; &#160;0.2 Reading the News<br />&#160; &#160; &#160;0.3 Definitions and Notations&#160;</p>
<p>1.0 The Software Lifecycle Model - Intro<br />&#160; &#160; &#160;1.1 Variations in Iteration&#160;<br />&#160; &#160; &#160;1.2 Lifecycle Model - Defect Removal<br />&#160; &#160; &#160;1.3 Lifecycle Model Applied to Common Methodologies <br />&#160; &#160; &#160;1.4 Lifecycle Model - Selecting an Iteration Approach&#160;&#160;</p>
<p>2.0 Software Size<br />&#160; &#160; &#160;2.05 Size - Comments on Lines of Code<br />&#160; &#160; &#160;2.1 Size - Staff Sizes&#160;<br />&#160; &#160; &#160;2.2 Size - Schedule Basics&#160;<br />&#160; &#160; &#160;2.3 Size - Debian Size Claims (New)&#160;</p>
<p><b><font color="#ff0000">3.0 Human Variation - Introduction (New)</font></b></p>
<p>Check out the lectures at&#160;<a href="https://cxlearn.com/catalog/22" title="http://cxlearn.com" style="margin: 0px; padding: 0px; list-style: none; text-decoration: none; color: rgb(61, 122, 191);">http://cxlearn.com</a>!</p>
<p><a href="https://cxlearn.com/catalog/22" title="Understanding Software Projects - Steve McConnell" style="margin: 0px; padding: 0px; list-style: none; text-decoration: none; color: rgb(61, 122, 191);"><img src="http://www.construx.com/uploadedImages/SteveMcConnellUnderstandingSoftwareProjects.jpg" alt="Understanding Software Projects - Steve McConnell" title="Understanding Software Projects - Steve McConnell" border="0" style="margin: 0px; padding: 0px; list-style: none; border: 0px; max-width: 100%; height: auto;" /></a></p>
<p>&#160;</p>]]></content:encoded>
 </item>
 <item rdf:about="/10x_Software_Development/Debian_Size_Claims_-_New_Lecture_Posted/?blogid=23485">
  <title>Debian Size Claims - New Lecture Posted</title>
  <link>https://www.construx.com/10x_Software_Development/Debian_Size_Claims_-_New_Lecture_Posted/?blogid=23485</link>
  <description><![CDATA[In this week&#39;s lecture (https://cxlearn.com) I demonstrate how to use some of the size information we&#39;ve discussed in other lectures by diving into the Wikipedia claims about the sizes of various versions of Debian. &#160;The point of]]></description>
  <dc:creator>stevemcc</dc:creator>
  <dc:date>2015-06-30T18:17:43Z</dc:date>
  <content:encoded><![CDATA[<p>In this week's lecture (<a href="https://cxlearn.com" title="https://cxlearn.com">https://cxlearn.com</a>) I demonstrate how to use some of the size information we've discussed in other lectures by diving into the Wikipedia claims about the sizes of various versions of Debian. &#160;The point of this week's lecture is to show how to apply critical thinking to size information presented by an authoritative source (Wikipedia), and how to arrive at a confident conclusion that that information is not credible. Practicing software professionals should be able to look at size claims like the Debian size claims and, based on general knowledge, immediately think, "That seems far from credible." Yet, few professionals actually do that. My hope is that working through public examples like this in the lecture series will help software professionals improve their instincts and judgment, which can then be applied to projects in their own organizations.&#160;</p>
<p>Lectures posted so far include: &#160;</p>
<p>0.0 Understanding Software Projects - Intro<br />&#160; &#160; &#160;0.1 Introduction - My Background<br />&#160; &#160; &#160;0.2 Reading the News<br />&#160; &#160; &#160;0.3 Definitions and Notations&#160;</p>
<p>1.0 The Software Lifecycle Model - Intro<br />&#160; &#160; &#160;1.1 Variations in Iteration&#160;<br />&#160; &#160; &#160;1.2 Lifecycle Model - Defect Removal<br />&#160; &#160; &#160;1.3 Lifecycle Model Applied to Common Methodologies <br />&#160; &#160; &#160;1.4 Lifecycle Model - Selecting an Iteration Approach&#160;&#160;</p>
<p>2.0 Software Size<br />&#160; &#160; &#160;2.05 Size - Comments on Lines of Code<br />&#160; &#160; &#160;2.1 Size - Staff Sizes&#160;<br />&#160; &#160; &#160;2.2 Size - Schedule Basics&#160;<br />&#160; &#160; &#160;<b><font color="#ff0000">2.3 Size - Debian Size Claims (New)</font></b></p>
<p>Check out the lectures at&#160;<a href="https://cxlearn.com/catalog/22" title="http://cxlearn.com" style="margin: 0px; padding: 0px; list-style: none; text-decoration: none; color: rgb(61, 122, 191);">http://cxlearn.com</a>!</p>
<p><a href="https://cxlearn.com/catalog/22" title="Understanding Software Projects - Steve McConnell" style="margin: 0px; padding: 0px; list-style: none; text-decoration: none; color: rgb(61, 122, 191);"><img src="http://www.construx.com/uploadedImages/SteveMcConnellUnderstandingSoftwareProjects.jpg" alt="Understanding Software Projects - Steve McConnell" title="Understanding Software Projects - Steve McConnell" border="0" style="margin: 0px; padding: 0px; list-style: none; border: 0px; max-width: 100%; height: auto;" /></a></p>
<p>&#160;</p>]]></content:encoded>
 </item>
 <item rdf:about="/10x_Software_Development/Succeeding_with_Geographically_Distributed_Scrum_Teams_-_New_White_Paper/?blogid=23485">
  <title>Succeeding with Geographically Distributed Scrum Teams - New White Paper</title>
  <link>https://www.construx.com/10x_Software_Development/Succeeding_with_Geographically_Distributed_Scrum_Teams_-_New_White_Paper/?blogid=23485</link>
  <description><![CDATA[We have a new white paper, &quot;Succeeding with Geographically Distributed Scrum Teams.&quot; To quote the white paper itself &#160;   When organizations adopt Agile throughout the enterprise, they typically apply it to both large and small projects. The gap]]></description>
  <dc:creator>stevemcc</dc:creator>
  <dc:date>2015-06-30T18:02:38Z</dc:date>
  <content:encoded><![CDATA[<p>We have a new white paper, "Succeeding with Geographically Distributed Scrum Teams." To quote the white paper itself:&#160;</p>
<blockquote><p>When organizations adopt Agile throughout the enterprise, they typically apply it to both large and small projects. The gap is that most Agile methodologies, such as Scrum and XP, are team-level workflow approaches. These approaches can be highly effective at the team level, but they do not address large project architecture, project management, requirements, and project planning needs. Our clients find that succeeding with Scrum on a large, geographically distributed team requires adopting additional practices to ensure the necessary coordination, communication, integration, and architectural work. This white paper discusses common considerations for success with geographically distributed Scrum.</p>
</blockquote>
<a href="http://www.construx.com/Resources/White_Papers/Succeeding_with_Geographically_Distributed_Scrum/" title="Check it out">Check it out</a>!]]></content:encoded>
 </item>
 <item rdf:about="/10x_Software_Development/Selecting_an_Iteration_Approach_-_New_Lecture_Posted/?blogid=23485">
  <title>Selecting an Iteration Approach - New Lecture Posted</title>
  <link>https://www.construx.com/10x_Software_Development/Selecting_an_Iteration_Approach_-_New_Lecture_Posted/?blogid=23485</link>
  <description><![CDATA[In this week's lecture (https://cxlearn.com) I explain how the lifecycle model can be used to show the incredibly large number of variations in approaches to software projects, especially including numerous variations in kinds of iteration. I identify]]></description>
  <dc:creator>stevemcc</dc:creator>
  <dc:date>2015-06-23T10:17:06Z</dc:date>
  <content:encoded><![CDATA[<p>In this week's lecture (<a href="https://cxlearn.com" title="https://cxlearn.com">https://cxlearn.com</a>) I explain how the lifecycle model can be used to show the incredibly large number of variations in approaches to software projects, especially including numerous variations in kinds of iteration. I identify approaches that work if you need predictability, if you need flexibility, if you need to attack uncertainty in requirements (i.e., unknown requirements), and if you need to attack uncertainty in architecture (i.e., technical risk). &#160;</p>
<p>The overarching message is that there are lots of different ways to organize the activities on a software project, and the way you organize the activities significantly affects what a project will accomplish.&#160;</p>
<p>Lectures posted so far include: &#160;</p>
<p>0.0 Understanding Software Projects - Intro<br />&#160; &#160; &#160;0.1 Introduction - My Background<br />&#160; &#160; &#160;0.2 Reading the News<br />&#160; &#160; &#160;0.3 Definitions and Notations&#160;</p>
<p>1.0 The Software Lifecycle Model - Intro<br />&#160; &#160; &#160;1.1 Variations in Iteration&#160;<br />&#160; &#160; &#160;1.2 Lifecycle Model - Defect Removal<br />&#160; &#160; &#160;1.3 Lifecycle Model Applied to Common Methodologies <br />&#160; &#160;<font color="#ff0000"> &#160;<b>1.4 Lifecycle Model - Selecting an Iteration Approach&#160;</b></font></p>
<p>2.0 Software Size<br />&#160; &#160; &#160;2.05 Size - Comments on Lines of Code<br />&#160; &#160; &#160;2.1 Size - Staff Sizes&#160;<br />&#160; &#160; &#160;2.2 Size - Schedule Basics&#160;</p>
<p>Check out the lectures at&#160;<a href="https://cxlearn.com/catalog/22" title="http://cxlearn.com" style="margin: 0px; padding: 0px; list-style: none; text-decoration: none; color: rgb(61, 122, 191);">http://cxlearn.com</a>!</p>
<p><a href="https://cxlearn.com/catalog/22" title="Understanding Software Projects - Steve McConnell" style="margin: 0px; padding: 0px; list-style: none; text-decoration: none; color: rgb(61, 122, 191);"><img src="http://www.construx.com/uploadedImages/SteveMcConnellUnderstandingSoftwareProjects.jpg" alt="Understanding Software Projects - Steve McConnell" title="Understanding Software Projects - Steve McConnell" border="0" style="margin: 0px; padding: 0px; list-style: none; border: 0px; max-width: 100%; height: auto;" /></a></p>
<p>&#160;</p>]]></content:encoded>
 </item>
 <item rdf:about="/10x_Software_Development/The_Lifecycle_Model_Applied_to_Common_Methodologies_-_New_Lecture_Posted/?blogid=23485">
  <title>The Lifecycle Model Applied to Common Methodologies - New Lecture Posted</title>
  <link>https://www.construx.com/10x_Software_Development/The_Lifecycle_Model_Applied_to_Common_Methodologies_-_New_Lecture_Posted/?blogid=23485</link>
  <description><![CDATA[I&#39;ve posted this week&#39;s lecture in my Understanding Software Projects series at https://cxlearn.com. Some of the past lectures that have been posted are still free.&#160;   In this week&#39;s lecture I explain how methodologies like Scrum,]]></description>
  <dc:creator>stevemcc</dc:creator>
  <dc:date>2015-06-17T13:35:41Z</dc:date>
  <content:encoded><![CDATA[<p>I've posted this week's lecture in my Understanding Software Projects series at <a href="https://cxlearn.com" title="https://cxlearn.com">https://cxlearn.com</a>. Some of the past lectures that have been posted are still free.&#160;</p>
<p>In this week's lecture I explain how methodologies like Scrum, Extreme Programming, Waterfall, and Code &amp; Fix look from the point of view of the Software Lifecycle Model. The lecture starts by going into more detail about how the Lifecycle Model can be abstracted for purposes of describing various methodologies and then goes into describing specific methodologies. I conclude with some comments about how specific practices fit within the Lifecycle Model (pair programming, formal inspections, test first, continuous integration, etc.).&#160;</p>
<p>The point of this is to show the way that familiar methodologies can be abstracted into a general-purpose model. Performing that abstraction helps to separate the substance from the hype in terms of identifying what is really significantly different about a methodology like Scrum or XP vs. what is just marketing fluff.&#160;</p>
<p>Lectures posted so far include: &#160;</p>
<p>0.0 Understanding Software Projects - Intro<br />&#160; &#160; &#160;0.1 Introduction - My Background<br />&#160; &#160; &#160;0.2 Reading the News<br />&#160; &#160; &#160;0.3 Definitions and Notations&#160;</p>
<p>1.0 The Software Lifecycle Model - Intro<br />&#160; &#160; &#160;1.1 Variations in Iteration&#160;<br />&#160; &#160; &#160;1.2 Lifecycle Model - Defect Removal<br /><b><font color="#ff0000">&#160; &#160; &#160;1.3 Lifecycle Model Applied to Common Methodologies (New this Week)</font></b></p>
<p>2.0 Software Size<br />&#160; &#160; &#160;2.05 Size - Comments on Lines of Code<br />&#160; &#160; &#160;2.1 Size - Staff Sizes&#160;<br />&#160; &#160; &#160;2.2 Size - Schedule Basics&#160;</p>
<p>Check out the lectures at&#160;<a href="https://cxlearn.com/catalog/22" title="http://cxlearn.com" style="margin: 0px; padding: 0px; list-style: none; text-decoration: none; color: rgb(61, 122, 191);">http://cxlearn.com</a>!</p>
<p><a href="https://cxlearn.com/catalog/22" title="Understanding Software Projects - Steve McConnell" style="margin: 0px; padding: 0px; list-style: none; text-decoration: none; color: rgb(61, 122, 191);"><img src="http://www.construx.com/uploadedImages/SteveMcConnellUnderstandingSoftwareProjects.jpg" alt="Understanding Software Projects - Steve McConnell" title="Understanding Software Projects - Steve McConnell" border="0" style="margin: 0px; padding: 0px; list-style: none; border: 0px; max-width: 100%; height: auto;" /></a></p>
<p>&#160;</p>]]></content:encoded>
 </item>
 <item rdf:about="/10x_Software_Development/Using_Lines_of_Code_as_a_Software_Size_Measure_-_New_Lecture_Posted/?blogid=23485">
  <title>Using Lines of Code as a Software Size Measure - New Lecture Posted</title>
  <link>https://www.construx.com/10x_Software_Development/Using_Lines_of_Code_as_a_Software_Size_Measure_-_New_Lecture_Posted/?blogid=23485</link>
  <description><![CDATA[I&#39;ve posted this week&#39;s lecture in my Understanding Software Projects series at https://cxlearn.com. Most of the lectures that have been posted are still free. Lectures posted so far include  &#160;   0.0 Understanding Software Projects]]></description>
  <dc:creator>stevemcc</dc:creator>
  <dc:date>2015-06-02T17:08:17Z</dc:date>
  <content:encoded><![CDATA[<p>I've posted this week's lecture in my Understanding Software Projects series at <a href="https://cxlearn.com" title="https://cxlearn.com">https://cxlearn.com</a>. Most of the lectures that have been posted are still free. Lectures posted so far include: &#160;</p>
<p>0.0 Understanding Software Projects - Intro<br />&#160; &#160; &#160;0.1 Introduction - My Background<br />&#160; &#160; &#160;0.2 Reading the News</p>
<p>1.0 The Software Lifecycle Model - Intro<br />&#160; &#160; &#160;1.1 Variations in Iteration&#160;<br />&#160; &#160; &#160;1.2 Lifecycle Model - Defect Removal</p>
<p>2.0 Software Size<br /><font color="#ff0000"><b>&#160; &#160; &#160;2.05 Size - Comments on Lines of Code (New)</b></font><br />&#160; &#160; &#160;2.1 Size - Staff Sizes&#160;<br />&#160; &#160; &#160;2.2 Size - Schedule Basics&#160;</p>
<p>Check out the lectures at&#160;<a href="https://cxlearn.com/catalog/22" title="http://cxlearn.com" style="margin: 0px; padding: 0px; list-style: none; text-decoration: none; color: rgb(61, 122, 191);">http://cxlearn.com</a>!</p>
<p><a href="https://cxlearn.com/catalog/22" title="Understanding Software Projects - Steve McConnell" style="margin: 0px; padding: 0px; list-style: none; text-decoration: none; color: rgb(61, 122, 191);"><img src="http://www.construx.com/uploadedImages/SteveMcConnellUnderstandingSoftwareProjects.jpg" alt="Understanding Software Projects - Steve McConnell" title="Understanding Software Projects - Steve McConnell" border="0" style="margin: 0px; padding: 0px; list-style: none; border: 0px; max-width: 100%; height: auto;" /></a></p>
<p>&#160;</p>]]></content:encoded>
 </item>
 <item rdf:about="/10x_Software_Development/Team_Sizes_and_Schedule_Basics_-_New_Lectures_Posted/?blogid=23485">
  <title>Team Sizes and Schedule Basics - New Lectures Posted</title>
  <link>https://www.construx.com/10x_Software_Development/Team_Sizes_and_Schedule_Basics_-_New_Lectures_Posted/?blogid=23485</link>
  <description><![CDATA[I&#39;ve posted this week&#39;s lecture in my Understanding Software Projects series at https://cxlearn.com. Most of the lectures that have been posted are still free. Lectures posted so far include  &#160;   0.0 Understanding Software Projects]]></description>
  <dc:creator>stevemcc</dc:creator>
  <dc:date>2015-05-28T18:19:01Z</dc:date>
  <content:encoded><![CDATA[<p>I've posted this week's lecture in my Understanding Software Projects series at <a href="https://cxlearn.com" title="https://cxlearn.com">https://cxlearn.com</a>. Most of the lectures that have been posted are still free. Lectures posted so far include: &#160;</p>
<p>0.0 Understanding Software Projects - Intro</p>
<p>0.1 Introduction - My Background<br />&#160; &#160; &#160;0.2 Reading the News</p>
<p>1.0 The Software Lifecycle Model - Intro<br />&#160; &#160; &#160;1.1 Variations in Iteration&#160;<br />&#160; &#160; &#160;1.2 Lifecycle Model - Defect Removal</p>
<p>2.0 Software Size<br />&#160; &#160; &#160;2.1 Size - Staff Sizes&#160;<b><font color="#ff0000">(New this week)</font></b><br />&#160; &#160; &#160;2.2 Size - Schedule Basics&#160;<b><font color="#ff0000">(New this week)</font></b></p>
<p>Check out the lectures at&#160;<a href="https://cxlearn.com/catalog/22" title="http://cxlearn.com" style="margin: 0px; padding: 0px; list-style: none; text-decoration: none; color: rgb(61, 122, 191);">http://cxlearn.com</a>!</p>
<p><a href="https://cxlearn.com/catalog/22" title="Understanding Software Projects - Steve McConnell" style="margin: 0px; padding: 0px; list-style: none; text-decoration: none; color: rgb(61, 122, 191);"><img src="http://www.construx.com/uploadedImages/SteveMcConnellUnderstandingSoftwareProjects.jpg" alt="Understanding Software Projects - Steve McConnell" title="Understanding Software Projects - Steve McConnell" border="0" style="margin: 0px; padding: 0px; list-style: none; border: 0px; max-width: 100%; height: auto;" /></a></p>
<p>&#160;</p>]]></content:encoded>
 </item>
 <item rdf:about="/10x_Software_Development/Variations_in_Iteration_-_New_Lecture_Posted/?blogid=23485">
  <title>Variations in Iteration - New Lecture Posted</title>
  <link>https://www.construx.com/10x_Software_Development/Variations_in_Iteration_-_New_Lecture_Posted/?blogid=23485</link>
  <description><![CDATA[I've posted this week's lecture in my Understanding Software Projects series at https://cxlearn.com. Most of the lectures that have been posted are still free. Lectures posted so far include  &#160;   0.0 Understanding Software Projects]]></description>
  <dc:creator>stevemcc</dc:creator>
  <dc:date>2015-05-19T17:26:54Z</dc:date>
  <content:encoded><![CDATA[<p>I've posted this week's lecture in my Understanding Software Projects series at <a href="https://cxlearn.com" title="https://cxlearn.com">https://cxlearn.com</a>. Most of the lectures that have been posted are still free. Lectures posted so far include: &#160;</p>
<p>0.0 Understanding Software Projects - Intro</p>
<p>0.1 Introduction - My Background<br />&#160; &#160; &#160;0.2 Reading the News</p>
<p>1.0 The Software Lifecycle Model - Intro<br /><font color="#000080"><b>&#160; &#160; &#160;1.1 Variations in Iteration (New this week)</b>&#160;</font><br />&#160; &#160; &#160;1.2 Lifecycle Model - Defect Removal</p>
<p>2.0 Software Size</p>
<p>Check out the lectures at&#160;<a href="https://cxlearn.com/catalog/22" title="http://cxlearn.com" style="margin: 0px; padding: 0px; list-style: none; text-decoration: none; color: rgb(61, 122, 191);">http://cxlearn.com</a>!</p>
<p><a href="https://cxlearn.com/catalog/22" title="Understanding Software Projects - Steve McConnell" style="margin: 0px; padding: 0px; list-style: none; text-decoration: none; color: rgb(61, 122, 191);"><img src="http://www.construx.com/uploadedImages/SteveMcConnellUnderstandingSoftwareProjects.jpg" alt="Understanding Software Projects - Steve McConnell" title="Understanding Software Projects - Steve McConnell" border="0" style="margin: 0px; padding: 0px; list-style: none; border: 0px; max-width: 100%; height: auto;" /></a></p>
<p>&#160;</p>]]></content:encoded>
 </item>
 <item rdf:about="/10x_Software_Development/New___Understanding_Software_Projects___Lectures_Posted/?blogid=23485">
  <title>New &#39;&#39;Understanding Software Projects&#39;&#39; Lectures Posted</title>
  <link>https://www.construx.com/10x_Software_Development/New___Understanding_Software_Projects___Lectures_Posted/?blogid=23485</link>
  <description><![CDATA[Two new lectures have been posted in my Understanding Software Projects lecture series at http://cxlearn.com. All the lectures that have been posted are still free (though this won't last forever). Lectures posted so far include]]></description>
  <dc:creator>stevemcc</dc:creator>
  <dc:date>2015-05-13T17:25:00Z</dc:date>
  <content:encoded><![CDATA[<p>Two new lectures have been posted in my Understanding Software Projects lecture series at <a href="https://cxlearn.com/catalog/22" title="http://cxlearn.com">http://cxlearn.com</a>. All the lectures that have been posted are still free (though this won't last forever). Lectures posted so far include:&#160;</p>
<p>0.0 Understanding Software Projects - Intro</p>
<p><b>&#160; &#160; &#160;0.1 Introduction - My Background (new this week)</b></p>
<p>&#160; &#160; &#160;0.2 Reading the News</p>
<p>1.0 The Software Lifecycle Model - Intro</p>
<p>&#160; &#160; &#160;<b>1.1 Lifecycle Model - Defect Removal (new this week)</b></p>
<p>2.0 Software Size</p>
<p>Check out the lectures at <a href="https://cxlearn.com/catalog/22" title="http://cxlearn.com">http://cxlearn.com</a>!</p>
<p><a href="https://cxlearn.com/catalog/22" title="Understanding Software Projects - Steve McConnell"><img src="https://www.construx.com/uploadedImages/SteveMcConnellUnderstandingSoftwareProjects.jpg" alt="Understanding Software Projects - Steve McConnell" title="Understanding Software Projects - Steve McConnell" border="0" /></a></p>]]></content:encoded>
 </item>
 <item rdf:about="/10x_Software_Development/Understanding_Software_Projects_Lecture_Series/?blogid=23485">
  <title>Understanding Software Projects Lecture Series</title>
  <link>https://www.construx.com/10x_Software_Development/Understanding_Software_Projects_Lecture_Series/?blogid=23485</link>
  <description><![CDATA[Check out my new lecture series, "Understanding Software Projects." In this lecture series, I explain The Four Factors Lifecycle Model and how understanding that model means understanding virtually every significant aspect of software project dynamics.&#160;Current lectures are always free. Check]]></description>
  <dc:creator>stevemcc</dc:creator>
  <dc:date>2015-05-07T15:24:09Z</dc:date>
  <content:encoded><![CDATA[<p>Check out my new lecture series, "Understanding Software Projects." In this lecture series, I explain The Four Factors Lifecycle Model and how understanding that model means understanding virtually every significant aspect of software project dynamics.&#160;Current lectures are always free. Check it out at&#160;<a href="https://cxlearn.com/catalog/22" target="_blank" rel="nofollow" style="color: rgb(59, 89, 152); cursor: pointer; text-decoration: none; font-family: helvetica, arial, 'lucida grande', sans-serif; font-size: 14px; line-height: 19.3199996948242px;">https://cxlearn.com/catalog/22</a>.</p>
<p>Here's a longer description from the website:</p>
<p><b>Steve McConnell</b> is the author of software industry classics including <i>Code Complete</i>, <i>Rapid Development</i>, and <i>Software Estimation</i>. He has been recognized as one of the three most influential people in the software industry, along with Bill Gates and Linus Torvalds.&#160;</p>
<p>Join Steve for this <b>Groundbreaking Lecture Series</b> that unlocks the secrets of effective software development. These lectures distill hard-won insights from decades of research and experience. They present learnings from Steve's work with hundreds of companies and thousands of projects. Lectures are 10-20 minutes each and are easy to include in your work day.&#160;&#160;</p>
<h2>Lecture Series Focus</h2>
<p>In this lecture series, Steve explains The Four Factors Lifecycle Model, and he explains how understanding that model means understanding virtually every significant aspect of software project dynamics. Topics include:&#160;&#160;</p>
<ul>
<li>The role of Size in the Four Factors Model</li>
<li>The role of Uncertainty in the Four Factors Model</li>
<li>The role of Human Variation in the Four Factors Model</li>
<li>The role of Defects in the Four Factors Model&#160;</li>
<li>Numerous case studies that illustrate how to apply the model to gain insights into your software projects&#160;&#160;</li>
</ul>
<h2>Benefits&#160;</h2>
<p>With the deeper understanding of software projects you gain from this lecture series, you will be able to:&#160;&#160;</p>
<ul>
<li>Plan your projects to meet their cost, schedule, quality, and functionality goals</li>
<li>Diagnose and correct your project's problems faster and more confidently</li>
<li>Accelerate the rate of improvement in your organization</li>
<li>Respond appropriately to new developments including new technologies and new software development practices&#160;&#160;</li>
</ul>
<h2>Accessing the Lectures&#160;</h2>
<p>Although the lectures build on each other, they may also be accessed individually. The series is planned to consist of about 50 lectures total. Lectures will be released through 2015 and 2016.&#160;</p>
<p>Steve's most recent lectures will be complimentary at CxLearn.com for the duration of the lecture series. The full set of archived lectures can be accessed for $99; they are also included in Construx eLearning's All Access Pass.&#160;</p>
<p>&#160;</p>]]></content:encoded>
 </item>
 <item rdf:about="/10x_Software_Development/Scrum_Chickens_and_Pigs/?blogid=23485">
  <title>Scrum Chickens and Pigs</title>
  <link>https://www.construx.com/10x_Software_Development/Scrum_Chickens_and_Pigs/?blogid=23485</link>
  <description><![CDATA[An interesting discussion came up on the Disciplined Agile Delivery discussion group on LinkedIn. Scott Ambler asked the question, "Is the chicken and pig analogy disrespectful?" The chicken and pig analogy is common in scrum. In case you haven't heard it, it's based on an old joke.]]></description>
  <dc:creator>stevemcc</dc:creator>
  <dc:date>2014-04-08T16:36:28Z</dc:date>
  <content:encoded><![CDATA[<p>An interesting discussion came up on the <a href="https://www.linkedin.com/groupAnswers?viewQuestionAndAnswers=&amp;discussionID=5858203978063441920&amp;gid=4685263&amp;commentID=5859374880595673088&amp;trk=view_disc&amp;fromEmail=&amp;ut=2q1I8L4Hndhmc1">Disciplined
Agile Delivery</a> discussion group on LinkedIn. Scott Ambler asked the
question, "Is the chicken and pig analogy disrespectful?" The chicken and pig
analogy is common in scrum. In case you haven’t heard it, it’s based on an old
joke:</p>
<p>A chicken and pig are talking about breakfast.
The chicken says, "How about if we make the farmer a bacon and egg breakfast?"
The pig says, "That idea doesn’t sound so great to me because, while you would
be <i>involved</i> in making the breakfast,
I would be <i>committed</i>." </p>
<p>Scrum teams have used this joke/analogy for
years to characterize the difference between being on the team vs. being a peripheral
contributor to the team, as well as to characterize ownership and responsibility
(i.e., you don’t volunteer someone else to be the pig). </p>
<p>The Scrum Guide <a href="https://www.scrum.org/About/All-Articles/articleType/ArticleView/articleId/90/Chickens-and-Pigs">officially
discontinued</a> use of this analogy in 2011, but in practice the analogy
continues to be used up to the present day. </p>
<p>For my part, I’ve found the chicken and pig
joke to be senseless since I first heard it pre-Scrum about 25 years ago. When I first
heard it I thought it seemed senseless because no rational pig would EVER
choose to participate in an endeavor that resulted in its death. Beyond that
issue, I think the analogy is ineffective or counterproductive at many levels.</p>
<p>1. The specific analogy with the animals is
potentially offensive to people outside the project team. What we call
ourselves and what we like other people to call us are two different things. We
might be OK calling ourselves pigs, but it’s a different matter if someone else
called us pigs. Or when they call us chickens. Why use an analogy that risks
alienating the very people it’s supposed to inform?</p>
<p>2. The concept that the analogy attempts to
communicate is also potentially offensive to people outside the project team.
Has anyone truly had success telling business partners including product owners
that they aren’t "committed" to a project? Organizations LOVE commitment.
Different stakeholders can be "committed" in different ways. In a healthy
organization the product owners will have their necks on the line -- maybe not
the same way that individual contributors do, but to them it’s the same.
Likewise the executive sponsor also has their neck on the line, in their own
way. What good does it do to pick a fight about who’s committed and who isn’t? </p>
<p>3. The words "committed" and "involved" don’t
mean what people who use this joke want those words to mean. The joke tries to
put specific meanings on "commitment" vs. "involved" as if those are standard
meanings in English, but they aren’t. When we say two people are "involved"
with each other that means there’s a significant relationship, possibly even a
commitment. When we say a problem is an "involved problem," we mean that it is
one that requires a level of commitment to solve. If anything, the everyday use
of the terminology is backwards for business stakeholders. As an executive in
the project, was I "involved?" No, because I wasn’t working on it day to day.
Was I "committed?" Absolutely, because I made sure the project got the funding
and other resources it needed. </p>
<p>4. It’s a gross exaggeration. In a scrum
context, NO ONE is the pig. No one is going to die because of their
participation in a scrum project. The fact that no one is the pig reinforces
that there is a continuum of levels of involvement/commitment rather than a
binary scale.</p>
<p>5. The intent of the analogy, specific animals
aside, as I’ve seen it most commonly used, is to create a crude sort of
us-vs-them thinking. "I’m committed, you’re only involved." I don’t see this as
helpful. </p>
<p>6. Use of the crude analogy hides a real, more
nuanced issue, and that is the issue of clearly defining roles and
responsibilities, and making sure to align accountability and authority. This
is an issue at all levels in organizations, and applies just as much to other
people in a business as it does to scrum team members. Non-scrum techniques
like RACI charts can help with this. </p>
<p>7. If people really want to use analogies in
this area how about:</p>
<p>(a) "Arranged marriage" (i.e., misalignment of
authority (arranger) and accountability (arrangee)). (I’m joking -- I recognize
there are potential cultural sensitivities on this one.)</p>
<p>(b) The colloquialism "Let’s you go do that." As in, "Let's you go fight that really big guy over there."</p>
<p>(c) "You’re not the boss of me." Unfortunately,
I see uses of the chicken and pig analogy that boil down to this meaning. </p>
<p>(d) A hunter and a deer are having a
conversation. The hunter says, "Let’s go deer hunting." The deer says, "I don’t
like that idea, because while you would only be involved, I would be
committed." (If you like chicken and pigs and don’t like hunter and deer, ask
yourself what is the substantive difference between those two analogies.)</p>
<p>(e) Two kids are playing at recess. One says,
"Let’s play coliseum owner and gladiator. I’ll be the coliseum owner. You be
the gladiator." The other kid says, "I don’t want to play that game because
you’ll just get to stand there collecting ticket money while I have to fight to
the death." </p>
<p>(f) A wealthy trophy collector recruits a lion
hunter to hunt a lion as a trophy. The lion hunter says, "You will only be
involved, but I will be committed to a potentially dangerous endeavor." The problem
with this analogy is that the trophy collector can respond, "Yes, but you are representing
that you have hunted lions before and are skilled in that activity. That is why
I wanted to hire you in the first place. What is the point of the distinction
you’re making between involved and committed? Do you want me to pay you to hunt
a lion or not?" </p>
<p>These are all pretty silly, but I have a hard
time coming up with a good replacement analogy because I don’t think the
underlying concept the analogy is trying to present is healthy. </p>
<p>8. A more fitting analogy that goes a different
direction might be something like a symphony orchestra. Everyone has their role
to play, and a satisfactory performance requires everyone doing their part:</p>
<ul>
<li>Each player has to play their own instrument well. </li>
<li>Each player also must cooperate with the other players. </li>
<li>The orchestra is led by a conductor with knowledge of the specific strengths and weaknesses of the players. </li>
<li>The fundraisers for the symphony must do their jobs, or the orchestra won’t exist at all. </li>
<li>Donors must donate to the symphony when the fund raisers call, or the orchestra won’t exist at all. </li>
<li>The audience must attend the symphony, or there will be no reason for the orchestra to exist. </li>
</ul>
<p>In short, I agree with the Scrum Guide's decision to stop using the chicken and pig analogy in 2011. It's time to move on to a more productive story. </p>]]></content:encoded>
 </item>
 <item rdf:about="/10x_Software_Development/Scrum_Trainer_/_Senior_Fellow_Position_Available/?blogid=23485">
  <title>Scrum Trainer / Senior Fellow Position Available</title>
  <link>https://www.construx.com/10x_Software_Development/Scrum_Trainer_/_Senior_Fellow_Position_Available/?blogid=23485</link>
  <description><![CDATA[Scrum Trainer / Senior Fellow position available.]]></description>
  <dc:creator>stevemcc</dc:creator>
  <dc:date>2013-04-17T18:38:15Z</dc:date>
  <content:encoded><![CDATA[<p>If you're a highly qualified Scrum Professional, check out <a title="our opening" href="http://construx.jobinfo.com/public/description.php?jid=9907308&amp;rcid=182563">our opening</a> for a Scrum Trainer / Senior Fellow. Here is a brief description (follow <a title="the link" href="http://construx.jobinfo.com/public/description.php?jid=9907308&amp;rcid=182563">the link</a> for more details): </p>
<h2>Travel the World, Help Teams Adopt Scrum, and Reach Their Full Potential</h2>
<p>Share your hard-won lessons learned with others. Work with a staff of world-class software experts including Steve McConnell, author of <em>Code Complete </em>and other software industry classics. Become a part of Construx Software, a company recognized multiple times as being the best small company to work for in Washington state.  </p>
<h2>Requirements for Scrum Trainer/Consultant</h2>
<p>We are looking for candidates who have: </p>
<ul>
<li>A minimum of 10 years of broad and deep experience in software development, including deep subject matter expertise in Scrum.</li>
<li>Broad and deep knowledge of current software development in-the-trenches practice, research, and literature. </li>
<li>Excellent verbal communication skills including the ability to present to groups of professionals.</li>
<li>“Leadership” level understanding of at least two of the following areas: Agile Development, Software Project Management, Software Requirements, Software Process, Software Maintenance, Software Design, Software Construction, Software Test, Software Quality, Software Configuration Management, and Software Tools and Methods.</li>
<li>The ability to work both independently and as part of a collaborative team.</li>
<li>Willingness to commit to providing excellent service quality. </li>
<li>Willingness to spend approximately 50% of your time traveling to client locations in North America, with occasional international trips.</li>
<li>An ongoing personal commitment to learning from clients, co-workers, publications, and other sources. </li>
</ul>
<p>Preferred but not required: </p>
<ul>
<li>Training experience and/or public speaking experience.</li>
<li>A four-year degree from an accredited university.</li>
<li>Industry certifications including Certified Scrum Trainer, Certified Scrum Coach, Certified Scrum Practitioner, Certified Scrum Master, and Professional Scrum Master.</li>
<li>A record of conference presentations.</li>
<li>A record of published work in refereed journals, blogs, and/or popular trade publications.</li>
</ul>
<h2>No Training Experience?</h2>
<p>Our primary interest is your <strong>depth of technical expertise</strong>. If you are technically qualified, Construx will provide deep support for developing your training and presentation skills. </p>
<h2>Why Construx?</h2>
<p>Construx Software is an established industry leader in software development best practices, providing consulting and training services to leading companies worldwide. Construx management has created an environment that empowers employees to perform at their highest levels while maintaining a healthy work-life balance. Low turnover, consistent profitability, and an exceptional work force are reasons this company has been named the best small company to work for in Washington state multiple times. Steve McConnell, Construx CEO, said of his thoughts upon founding the company 17 years ago, "I wanted to create a company that I personally would want to work in the rest of my career." </p>
<p>For more details, and to contact us or apply for the position, please visit <a title="Construx Software Scrum Trainer Position" href="http://construx.jobinfo.com/description.php?jid=9907308&amp;refid=15"><font color="#3d7abf">here</font></a>. </p>]]></content:encoded>
 </item>
 <item rdf:about="/10x_Software_Development/2013_ECSE_Discussion_Topics_Posted/?blogid=23485">
  <title>2013 ECSE Discussion Topics Posted</title>
  <link>https://www.construx.com/10x_Software_Development/2013_ECSE_Discussion_Topics_Posted/?blogid=23485</link>
  <description><![CDATA[I host an executive discussion group in the Seattle area called the Executive Council for Software Excellence (ECSE). We meet monthly at our offices in Bellevue, usually on the second Monday of each month. The group focuses on enterprise-level software development issues.]]></description>
  <dc:creator>stevemcc</dc:creator>
  <dc:date>2013-04-01T20:00:18Z</dc:date>
  <content:encoded><![CDATA[<p>I host an executive discussion group in the Seattle area called the Executive Council for Software Excellence (ECSE). We meet monthly at our offices in Bellevue, usually on the second Monday of each month. The group focuses on enterprise-level software development issues. This is a great opportunity to network and compare challenges and solutions with other executives who, we have found, tend to be wrestling with the same issues you are. </p>
<p>To keep the discussions focused, the group membership is comprised of local executives who oversee technical staffs of 50 or more with titles like VP or Director. The number "50" is not a hard number. The most important participation criterion is having a multiple-project span of control. </p>
<p>Here's our calendar for the rest of this year:</p>
<table>
<tbody>
<tr>
<td style="width: 100px;" abbr="null" axis="null"><p>April 8</p>
</td>
<td><p>Beating the Iron Triangle:
  Succeeding When Cost, Schedule, and Functionality are all Fixed</p>
</td>
</tr>
<tr>
<td valign="top"><p>May 13</p>
</td>
<td><p>Strategies for Project Portfolio Management</p>
</td>
</tr>
<tr>
<td valign="top"><p>June 10</p>
</td>
<td><p>Estimation: Merging Waterfall and Agile Development</p>
</td>
</tr>
<tr>
<td><p>July 8</p>
</td>
<td><p>Supporting Innovation</p>
</td>
</tr>
<tr>
<td><p>August</p>
</td>
<td><p><i>Summer Break</i></p>
</td>
</tr>
<tr>
<td><p>September 9</p>
</td>
<td><p>Leading Change Initiatives</p>
</td>
</tr>
<tr>
<td><p>October 14</p>
</td>
<td><p>Lessons Learned in Agile
  Development</p>
</td>
</tr>
<tr>
<td><p>November 4</p>
</td>
<td><p>Major Levers for Productivity
  Improvement</p>
</td>
</tr>
<tr>
<td><p>December 9</p>
</td>
<td><p>Cloud Development Best
  Practices</p>
</td>
</tr>
</tbody>
</table>
<p>You can see a little more information about the group on our website at <a href="http://www.construx.com/ecse/">http://www.construx.com/ecse/</a>. </p>
<p>If you're in the Bellevue/Seattle area and you're interested in participating, please <a title="Interest in ECSE Meetings" href="mailto:stevemcc@construx.com?subject=Interest in ECSE Meetings">let me know</a>. </p>]]></content:encoded>
 </item>
 <item rdf:about="/10x_Software_Development/Software_Project_Archaeology/?blogid=23485">
  <title>Software Project Archaeology</title>
  <link>https://www.construx.com/10x_Software_Development/Software_Project_Archaeology/?blogid=23485</link>
  <description><![CDATA[<p>A colleague asked me the following question: </p>
<blockquote><p>Assume you were asked to assess a software development team from outside of the organization (that might occur as due diligence or some other context), and you had full access to all internal artifacts of the organization, but you were not allowed to talk directly with anyone from inside. To what degree could you evaluate the quality and effectiveness of the software team just from reviewing <em>just their work</em>, without knowing anything else about them? </p>
</blockquote>
<p>This is a wonderful question, </p>]]></description>
  <dc:creator>stevemcc</dc:creator>
  <dc:date>2013-03-13T18:53:44Z</dc:date>
  <content:encoded><![CDATA[<p>A colleague asked me the following question: </p>
<blockquote><p>Assume you were asked to assess a software development team from outside of the organization (that might occur as due diligence or some other context), and you had full access to all internal artifacts of the organization, but you were not allowed to talk directly with anyone from inside. To what degree could you evaluate the quality and effectiveness of the software team just from reviewing <em>just their work</em>, without knowing anything else about them? </p>
</blockquote>
<p>This is a wonderful question, and it isn't just theoretical. We do consulting engagements in which we review project artifacts before we talk to team members, and we use those reviews to target the questions we will ask when we do in-person interviews. When we look at "artifacts" we look at code, test cases, documents, drawings, post-it notes, emails, wiki pages, graphs, database contents, digital whiteboard photos -- basically any repository for project data. </p>
<p>We look at the following kinds of questions: </p>
<p><strong>What artifacts exist, and what is their scope? </strong>Does the project have artifacts that at least attempt to cover all project activities including requirements, design, construction standards, code documentation, general planning, test planning, defect reporting, etc.? If artifacts are not comprehensive, is there any logic behind what is covered and what isn't? </p>
<p><strong>What is the depth of coverage of the artifacts?</strong> Do the artifacts try to document every detail, or are they more general? Is the level of detail appropriate to the kind of work the company does? </p>
<p><strong>Are the artifacts substantive? </strong>We often see artifacts that are so generic that they are useless to the project. Sometimes we see unmodified boilerplate presented as project documentation. Related: does it appear that the people creating the artifacts understand why they are creating the artifacts, or does it look more like they’re “going through the motions” without understanding why they’re doing what they’re doing? </p>
<p><strong>What is the quality of the work in the artifacts? </strong>For example, are requirements statements well formed? Is there evidence that customers have been involved in formulating requirements? Is there evidence that work is getting reviewed? Do the plans look realistic and achievable? Does the design go beyond just drawing boxes and lines and appear to contain some thought? </p>
<p><strong>How long does it take the organization to produce the artifacts?</strong> It isn’t unheard of for organizations to generate artifacts for the first time when they receive our request to show us their work. These organizations know at some level that they <em>should</em> be creating certain artifacts, but they haven’t been. </p>
<p><strong>How recently have the artifacts been updated? </strong>This gives one indication of whether the artifacts are actually being used.
We assume that if no artifacts have been updated for the past 6 months, they are most likely being ignored (or were never relevant in the first place). </p>
<p><strong>What evidence do we see that the artifacts are being used? </strong>I.e., is the team creating “write-only” documentation that isn’t really serving any useful purpose on the project, or are the artifacts being used? </p>
<p><strong>Are the artifacts readily accessible to the project team via a revision control system, wiki pages, or some other means?</strong> If team members don't have ready access to materials, that calls into question the degree to which they can actually be using the materials. </p>
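<p>As a purely hypothetical illustration of the recency question above: in a repository checkout you can get a quick first read on which documents have gone untouched for roughly six months by checking file modification times. This sketch assumes a Unix environment; the <code>REPO</code> variable, the <code>stale_docs</code> helper name, the file patterns, and the 183-day cutoff are all illustrative choices, not something prescribed by the review process described here.</p>

```shell
# Hypothetical first pass at "how recently have the artifacts been updated?":
# list document files not modified in roughly six months (183 days).
# REPO, the patterns, and the cutoff are illustrative assumptions.
REPO="${REPO:-.}"
stale_docs() {
  find "$REPO" -type f \
    \( -name '*.md' -o -name '*.txt' -o -name '*.docx' \) \
    -mtime +183 -print
}
stale_docs
```

<p>A long list from a command like this doesn't prove the artifacts are dead -- but combined with the usage and accessibility questions above, it helps target which documents to ask about in interviews.</p>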
<p>We've worked with so many different companies in so many different industries that we no longer have many preconceived notions of what specific artifacts need to look like. We've seen good organizations with minimal documentation, and we've seen bad organizations with extensive documentation. What we are looking for is this: Do the artifacts, considered as a set, show us a project that is being run in an organized, deliberate way--one that is paying attention and learning from its experience? Or do the artifacts show a project that is chaotic, constantly in crisis mode, and mostly working reactively rather than proactively? </p>
<p>When we do assessments with organizations, occasionally we're surprised to find an organization that is more effective than we would have thought based on our document reviews, but that's the exception, and usually we can draw numerous valid conclusions just by doing software archaeology. </p>]]></content:encoded>
 </item>
 <item rdf:about="/10x_Software_Development/New_White_Papers_Now_Available/?blogid=23485">
  <title>New White Papers Now Available</title>
  <link>https://www.construx.com/10x_Software_Development/New_White_Papers_Now_Available/?blogid=23485</link>
<description><![CDATA[<p>We've recently posted more new white papers on our website. These are free to members (and membership is free). </p>
<h3>5 Things Every Software Executive Should Know About Scrum</h3>
<p>The success (or failure) of Scrum is all in how it’s adopted. This white paper explores five key things software executives should understand when considering a Scrum adoption. It summarizes what Scrum can and cannot do and provides advice to software executives on how they can support the adoption of Scrum. </p>
<h3>Bridging the Product Introduction Gap</h3>
<p>New software and hardware technologies are driving product innovation at an unprecedented rate. Companies that thrive in this new era will adopt practices that foster product management and product development collaboration to blend new technology alternatives with sound market insight. </p>
<h3>Early Requirements Prioritization</h3>
<p>Organizations can realize significant schedule and cost savings by making early decisions about what will and will not be delivered by a project or program. This paper outlines a technique for early analysis of the potential cost to develop a feature and the potential return on investment. This preliminary information helps organizations make early business decisions about feature set priorities. </p>
<h3>Introducing Agility into a Phase Gate Process</h3>
<p>Phase gate processes are common in mature software development organizations that want to support continuous evaluation of products and projects to ensure that it makes sense for the business to continue investing in them. Some organizations want to introduce additional agility into their phase gate process while maintaining their oversight and governance. This white paper outlines the major considerations and keys to success for introducing agility into a well-defined phase gate process.</p>]]></description>
  <dc:creator>stevemcc</dc:creator>
  <dc:date>2012-12-21T17:29:03Z</dc:date>
  <content:encoded><![CDATA[<p>We've recently posted more new white papers on our website. These are free to members (and membership is free). </p>
<h3>5 Things Every Software Executive Should Know About Scrum </h3>
<p>The success (or failure) of Scrum is all in how it’s adopted. This white paper explores five key things software executives should understand when considering a Scrum adoption. It summarizes what Scrum can and cannot do and provides advice to software executives on how they can support the adoption of Scrum. </p>
<h3>Bridging the Product Introduction Gap</h3>
<p> New software and hardware technologies are driving product innovation at an unprecedented rate. Companies that thrive in this new era will adopt practices that foster product management and product development collaboration to blend new technology alternatives with sound market insight. </p>
<h3>Early Requirements Prioritization</h3>
<p>Organizations can realize significant schedule and cost savings by making early decisions about what will and will not be delivered by a project or program. This paper outlines a technique for early analysis of the potential cost to develop a feature and the potential return on investment. This preliminary information helps organizations make early business decisions about feature set priorities. </p>
<h3>Introducing Agility into a Phase Gate Process</h3>
<p>Phase gate processes are common in mature software development organizations that want to support continuous evaluation of products and projects to ensure that it makes sense for the business to continue investing in them. Some organizations want to introduce additional agility into their phase gate process while maintaining their oversight and governance. This white paper outlines the major considerations and keys to success for introducing agility into a well-defined phase gate process.</p>
<p>Check out all of our white papers <a title="Construx Software Development White Papers" href="http://www.construx.com/resourcelanding/?tax=135">here</a>. </p>]]></content:encoded>
 </item>
 <item rdf:about="/10x_Software_Development/Construx_Executive_Summit_2012/?blogid=23485">
  <title>Construx Executive Summit 2012</title>
  <link>https://www.construx.com/10x_Software_Development/Construx_Executive_Summit_2012/?blogid=23485</link>
  <description><![CDATA[<p> <img width="560" height="199" src="http://www.construx.com/uploadedimages/SummitHeader-Small.jpg" /></p>
<p>A rare opportunity for top software executives to compare software development challenges and solutions with a highly select group of executive peers, hosted by Steve McConnell with software thought leaders Mike Cohn, Stuart Crabb, David Anderson, Karl Wiegers, John Clifford, and others. <a href="http://www.construx.com/Summit_Registration/"><strong>Space is Limited--Register Now!</strong></a></p>
<span>The Event</span><p>The Construx Software Executive Summit provides a forum for top executives to compare, evaluate, and improve their Software Development experiences and strategies at the enterprise level. Through <strong><a href="http://www.construx.com/Thought_Leadership/Events/Software_Executive_Summit/Invited_Talks/?id=14886">motivating keynote addresses</a></strong> and <a href="http://www.construx.com/Thought_Leadership/Events/Software_Executive_Summit/Discussion_Topics/?id=14887"><strong>insightful small group discussions</strong></a>, participants will develop new insights into their own software organizations and will explore their challenges and opportunities with executive peers.</p>
<div class="grayBox"><p>There is no other event that will give you this level of interaction with other technology execs in a very productive environment. Just a very good group of people who were very respectful of one another, very smart, insightful, and fun." -- Rick Logue, Vice President, Information Technology, ADP</p>
<span><a href="http://www.construx.com/Thought_Leadership/Events/Software_Executive_Summit/Testimonials/?id=14841">see more &gt;</a></span></div>
<span>The Keynotes</span><p><strong>The 2012 Summit will feature the following keynote addresses:</strong></p>
<ul>
<li>Steve McConnell, Software Estimation in an Agile World</li>
<li>Mike Cohn, GASPing Toward the Future: What's in Store for Scrum</li>
<li>Stuart Crabb, Facebook's Approach to Building and Managing the Next Generation Workforce</li>
<li>David Anderson, Delivering Better Predictability, Business Agility, and Good Governance with Kanban</li>
<li>Karl Wiegers, Cosmic Truths about Software Requirements</li>
<li>John Clifford, Software Dogfighting: A Strategic Guide to the Agile Organization   <a href="http://www.construx.com/Thought_Leadership/Events/Software_Executive_Summit/Invited_Talks/?id=14886"><strong>more &gt;</strong></a></li>
</ul>
<span>The Discussions</span><p>Year after year, Summit attendees report that peer discussions are the most valuable part of the Summit. This year's discussion topics include:</p>
<ul>
<li>Accelerating Organizational Change</li>
<li>Successful Leadership in Software Development</li>
<li>Beyond Technical Debt: Quality Practices</li>
<li>Succeeding with Underresourced Teams</li>
<li>Scaling Scrum</li>
<li>Creating Effective Organizational Structures</li>
<li>Managing "Core" Development (aka "shared services" or "foundations")</li>
<li>Improving Productivity   <a href="http://www.construx.com/Thought_Leadership/Events/Software_Executive_Summit/Discussion_Topics/?id=14887"><strong>more &gt;</strong></a></li>
</ul>
<p>Summit participants are top technical executives from companies slated to include Amazon.com, Disney, eBay, Electronic Arts, Facebook, Johnson &amp; Johnson, Intel, Microsoft, Nordstrom, Shell, TiVo, and many other top technology-intensive companies. <a href="http://www.construx.com/Summit_Registration/"><strong>Reserve your spot today</strong></a>.</p>
<p><strong>The Invaluable Takeaways. </strong>For the past five years, 98.7% of attendees said they would attend again within two years, and <em>100% said they would recommend the event to others </em>(<a href="http://www.construx.com/Thought_Leadership/Events/Software_Executive_Summit/Testimonials/?id=14841"><strong>see comments</strong></a>).</p>
<span>Benefits of Attending </span><p>Attendees will share and compare experiences with other software executives focused on software-development issues. You will discuss challenges in-depth via peer-to-peer discussions. You can explore issues with industry thought leaders including Steve McConnell, Mike Cohn, Stuart Crabb, David Anderson, Karl Wiegers, John Clifford, and other Summit attendees. And you will have the additional opportunity to participate in monthly dial-in meetings for two years following the Summit. <strong><a href="http://www.construx.com/Summit_Registration/">Register Now!</a></strong></p>
<div class="grayBox"><p>"Most valuable: All of it. Peers, speakers, topics, very good. Money well spent."<br />Bruce Kenny, EVP, Technology &amp; Hosted Ops, Webtrends, Inc.</p>
<span><a href="http://www.construx.com/Thought_Leadership/Events/Software_Executive_Summit/Testimonials/?id=14841">see more &gt;</a></span></div>
<span>Who Should Attend</span><p><strong>At past Summits, 95% of participants have held titles of VP, CTO, Director, or higher.</strong> All participants should have multi-project responsibility for software development at the organization or enterprise level. In most organizations, executives at this level will have staffs of 50-100 or more. (In smaller organizations the total staff can be slightly smaller.) Attendees will be assigned to discussion groups based on profiles submitted prior to the Summit. Construx reserves the right to limit participation in the Summit to participants who meet the participation criteria of multi-project oversight in a C-level, VP, or Director role or equivalent.</p>
<p>The Summit will be held in downtown Seattle, November 12-14, 2012. Participation fee is $3500. Reservations will be accepted on a first-come, first-served basis. <a href="http://www.construx.com/Summit_Registration/"><strong>Reserve your spot today</strong></a>.</p>
<p><em>* This offer may not be combined with any other offer including public seminar early bird registration discounts.</em></p>]]></description>
  <dc:creator>stevemcc</dc:creator>
  <dc:date>2012-06-20T13:22:00Z</dc:date>
  <content:encoded><![CDATA[<p> <img width="560" height="199" src="http://www.construx.com/uploadedimages/SummitHeader-Small.jpg" /></p>
<p>A rare opportunity for top software executives to compare software development challenges and solutions with a highly select group of executive peers, hosted by Steve McConnell with software thought leaders Mike Cohn, Stuart Crabb, David Anderson, Karl Wiegers, John Clifford, and others. <a href="http://www.construx.com/Summit_Registration/"><strong>Space is Limited--Register Now!</strong></a></p>
<span>The Event</span><p>The Construx Software Executive Summit provides a forum for top executives to compare, evaluate, and improve their Software Development experiences and strategies at the enterprise level. Through <strong><a href="http://www.construx.com/Thought_Leadership/Events/Software_Executive_Summit/Invited_Talks/?id=14886">motivating keynote addresses</a></strong> and <a href="http://www.construx.com/Thought_Leadership/Events/Software_Executive_Summit/Discussion_Topics/?id=14887"><strong>insightful small group discussions</strong></a>, participants will develop new insights into their own software organizations and will explore their challenges and opportunities with executive peers.</p>
<div class="grayBox"><p>There is no other event that will give you this level of interaction with other technology execs in a very productive environment. Just a very good group of people who were very respectful of one another, very smart, insightful, and fun." -- Rick Logue, Vice President, Information Technology, ADP</p>
<span><a href="http://www.construx.com/Thought_Leadership/Events/Software_Executive_Summit/Testimonials/?id=14841">see more &gt;</a></span></div>
<span>The Keynotes</span><p><strong>The 2012 Summit will feature the following keynote addresses:</strong></p>
<ul>
<li>Steve McConnell, Software Estimation in an Agile World</li>
<li>Mike Cohn, GASPing Toward the Future: What's in Store for Scrum</li>
<li>Stuart Crabb, Facebook's Approach to Building and Managing the Next Generation Workforce</li>
<li>David Anderson, Delivering Better Predictability, Business Agility, and Good Governance with Kanban</li>
<li>Karl Wiegers, Cosmic Truths about Software Requirements</li>
<li>John Clifford, Software Dogfighting: A Strategic Guide to the Agile Organization   <a href="https://www.construx.com/Thought_Leadership/Events/Software_Executive_Summit/Invited_Talks/?id=14886"><strong>more &gt;</strong></a></li>
</ul>
<span>The Discussions</span><p>Year after year, Summit attendees report that peer discussions are the most valuable part of the Summit. This year's discussion topics include:</p>
<ul>
<li>Accelerating Organizational Change</li>
<li>Successful Leadership in Software Development</li>
<li>Beyond Technical Debt: Quality Practices</li>
<li>Succeeding with Underresourced Teams</li>
<li>Scaling Scrum</li>
<li>Creating Effective Organizational Structures</li>
<li>Managing "Core" Development (aka "shared services" or "foundations")</li>
<li>Improving Productivity   <a href="https://www.construx.com/Thought_Leadership/Events/Software_Executive_Summit/Discussion_Topics/?id=14887"><strong>more &gt;</strong></a></li>
</ul>
<p>Summit participants are top technical executives from companies slated to include Amazon.com, Disney, eBay, Electronic Arts, Facebook, Johnson &amp; Johnson, Intel, Microsoft, Nordstrom, Shell, TiVo, and many other top technology-intensive companies. <a href="https://www.construx.com/Summit_Registration/"><strong>Reserve your spot today</strong></a>.</p>
<p><strong>The Invaluable Takeaways. </strong>For the past five years, 98.7% of attendees said they would attend again within two years, and <em>100% said they would recommend the event to others </em>(<a href="https://www.construx.com/Thought_Leadership/Events/Software_Executive_Summit/Testimonials/?id=14841"><strong>see comments</strong></a>).</p>
<span>Benefits of Attending </span><p>Attendees will share and compare experiences with other software executives focused on software-development issues. You will discuss challenges in-depth via peer-to-peer discussions. You can explore issues with industry thought leaders including Steve McConnell, Mike Cohn, Stuart Crabb, David Anderson, Karl Wiegers, John Clifford, and other Summit attendees. And you will have the additional opportunity to participate in monthly dial-in meetings for two years following the Summit. <strong><a href="http://www.construx.com/Summit_Registration/">Register Now!</a></strong></p>
<div class="grayBox"><p>"Most valuable: All of it. Peers, speakers, topics, very good. Money well spent."<br />Bruce Kenny, EVP, Technology &amp; Hosted Ops, Webtrends, Inc.</p>
<span><a href="http://www.construx.com/Thought_Leadership/Events/Software_Executive_Summit/Testimonials/?id=14841">see more &gt;</a></span></div>
<span>Who Should Attend</span><p><strong>At past Summits, 95% of participants have held titles of VP, CTO, Director, or higher.</strong> All participants should have multi-project responsibility for software development at the organization or enterprise level. In most organizations, executives at this level will have staffs of 50-100 or more. (In smaller organizations the total staff can be slightly smaller.) Attendees will be assigned to discussion groups based on profiles submitted prior to the Summit. Construx reserves the right to limit participation in the Summit to participants who meet the participation criteria of multi-project oversight in a C-level, VP, or Director role or equivalent.</p>
<p>The Summit will be held in downtown Seattle, November 12-14, 2012. Participation fee is $3500. Reservations will be accepted on a first-come, first-served basis. <a href="http://www.construx.com/Summit_Registration/"><strong>Reserve your spot today</strong></a>.</p>
<p><em>* This offer may not be combined with any other offer including public seminar early bird registration discounts.</em></p>]]></content:encoded>
 </item>
 <item rdf:about="/10x_Software_Development/Technical_Debt_Webinar_Archive_Version_Now_Available/?blogid=23485">
  <title>Technical Debt Webinar–Archive Version Now Available</title>
  <link>https://www.construx.com/10x_Software_Development/Technical_Debt_Webinar_Archive_Version_Now_Available/?blogid=23485</link>
  <description><![CDATA[<p>Last week’s webinar on technical debt is now <a href="http://w.on24.com/r.htm?e=347946&amp;s=1&amp;k=906C459BBBF4F99D80792A52857B7F8A">available for download</a>. </p>]]></description>
  <dc:creator>stevemcc</dc:creator>
  <dc:date>2011-09-27T13:42:17Z</dc:date>
  <content:encoded><![CDATA[<p>Last week’s webinar on technical debt is now <a href="http://www.construx.com/Resources/webinar/managing_technical_debt/">available for download</a>. </p>]]></content:encoded>
 </item>
 <item rdf:about="/10x_Software_Development/Managing_Technical_Debt__Free_Webinar/?blogid=23485">
  <title>Managing Technical Debt: Free Webinar</title>
  <link>https://www.construx.com/10x_Software_Development/Managing_Technical_Debt__Free_Webinar/?blogid=23485</link>
  <description><![CDATA[<p>I’ll be giving a free webinar on Managing Technical Debt on September 21, 2011 at 10:00 AM Pacific Time. Here’s the registration link: <a href="http://adtmag.com/webcasts/2011/08/construx-managing-technical-debt.aspx?partnerref=con5">http://adtmag.com/webcasts/2011/08/construx-managing-technical-debt.aspx?partnerref=con5</a></p>
<p><br />Here’s a brief overview:</p>
<p><br />“Technical Debt” refers to delayed technical work that is incurred when technical short cuts are taken, usually in pursuit of calendar-driven software schedules. Just like financial debt, some technical debts can serve valuable business purposes. Other technical debts are simply counterproductive. In this webinar, Steve McConnell explains in detail the different types of technical debt, when organizations should and shouldn’t take them on, and best practices in managing, tracking and paying down debt. You’ll gain insights into how to use technical debt strategically and how to keep technical and business staff involved in the process. </p>]]></description>
  <dc:creator>stevemcc</dc:creator>
  <dc:date>2011-09-08T17:15:54Z</dc:date>
  <content:encoded><![CDATA[<p>I’ll be giving a free webinar on Managing Technical Debt on September 21, 2011 at 10:00 AM Pacific Time. Here’s the registration link: <a href="http://adtmag.com/webcasts/2011/08/construx-managing-technical-debt.aspx?partnerref=con5">http://adtmag.com/webcasts/2011/08/construx-managing-technical-debt.aspx?partnerref=con5</a></p>
<p><br /><strong>Here’s a brief overview:</strong><br />“Technical Debt” refers to delayed technical work that is incurred when technical short cuts are taken, usually in pursuit of calendar-driven software schedules. Just like financial debt, some technical debts can serve valuable business purposes. Other technical debts are simply counterproductive. In this webinar, Steve McConnell explains in detail the different types of technical debt, when organizations should and shouldn’t take them on, and best practices in managing, tracking and paying down debt. You’ll gain insights into how to use technical debt strategically and how to keep technical and business staff involved in the process. </p>]]></content:encoded>
 </item>
 <item rdf:about="/10x_Software_Development/Construx_Executive_Summit_2011__Software_Thought_Leaders/?blogid=23485">
  <title>Construx Executive Summit 2011: Software Thought Leaders</title>
  <link>https://www.construx.com/10x_Software_Development/Construx_Executive_Summit_2011__Software_Thought_Leaders/?blogid=23485</link>
  <description><![CDATA[<p>Our 2011 Software Executive Summit registration is now open. We have an early bird registration special of $1000 off through August 15. <a href="http://www.construx.com/summit">Register today</a>!</p>
<div class="user-contributed"><p class="summary">Our speaker focus this year is Software Thought Leaders, and once again we have an amazing lineup. We have the father of evolutionary development (Tom Gilb), inventor of the wiki (Ward Cunningham), creator of the CMM and People CMM (Bill Curtis), creator of the 4+1 architecture view and RUP (Philippe Kruchten), and Google"s leading test director (James Whittaker). Please see the <a href="http://www.construx.com/summit">Summit website</a> for more details. </p>
<p class="summary">I look forward to seeing you in October!</p>
</div>]]></description>
  <dc:creator>stevemcc</dc:creator>
  <dc:date>2011-07-29T08:51:00Z</dc:date>
  <content:encoded><![CDATA[<p>Our 2011 Software Executive Summit registration is now open. We have an early bird registration special of $1000 off&#160;through August 15. <a href="https://www.construx.com/Summit_Registration/">Register today</a>!</p>
<p>Our speaker focus this year is Software Thought Leaders, and once again we have an amazing lineup. We have the father of evolutionary development (Tom Gilb), inventor of the wiki (Ward Cunningham), creator of the CMM and People CMM (Bill Curtis), creator of the 4+1 architecture view and RUP (Philippe Kruchten), and Google's leading test director (James Whittaker). Please see the <a href="https://www.construx.com/Thought_Leadership/Events/Software_Executive_Summit_Page_Layout/?id=14501">Summit website</a> for more details. </p>
<p>I look forward to seeing you in October!</p>]]></content:encoded>
 </item>
 <item rdf:about="/10x_Software_Development/10_Deadly_Sins_of_Software_Estimation__Free_Webinar/?blogid=23485">
  <title>10 Deadly Sins of Software Estimation: Free Webinar</title>
  <link>https://www.construx.com/10x_Software_Development/10_Deadly_Sins_of_Software_Estimation__Free_Webinar/?blogid=23485</link>
<description><![CDATA[<p>I’ll be giving a free webinar on the 10 Deadly Sins of Software Estimation on April 28, 2011 at 10:00 AM Pacific Time. Here’s a link to sign up for it: <a href="http://adtmag.com/webcasts/2011/03/construx-10-deadly-sins-of-software-estimation.aspx?partnerref=con4">http://adtmag.com/webcasts/2011/03/construx-10-deadly-sins-of-software-estimation.aspx?partnerref=con4</a>. </p>
<p>Here’s the overview:</p>
<p>The average project overruns its budget and schedule estimates by 50-80 percent, but in practice little work is done that could truly be called "estimation." Many projects are scheduled using a combination of legitimate business targets and liberal doses of wishful thinking. In this talk, I will present 10 of the worst ways estimates go wrong and time-tested rules of thumb for dramatically improving estimation accuracy.</p>]]></description>
  <dc:creator>stevemcc</dc:creator>
  <dc:date>2011-04-12T12:36:00Z</dc:date>
<content:encoded><![CDATA[<p>I’ll be giving a free webinar on the 10 Deadly Sins of Software Estimation on April 28, 2011 at 10:00 AM Pacific Time. Here’s a link to sign up for it: <a href="http://adtmag.com/webcasts/2011/03/construx-10-deadly-sins-of-software-estimation.aspx?partnerref=con4">http://adtmag.com/webcasts/2011/03/construx-10-deadly-sins-of-software-estimation.aspx?partnerref=con4</a>. </p>
<p>Here’s the overview:</p>
<p>The average project overruns its budget and schedule estimates by 50-80 percent, but in practice little work is done that could truly be called "estimation." Many projects are scheduled using a combination of legitimate business targets and liberal doses of wishful thinking. In this talk, I will present 10 of the worst ways estimates go wrong and time-tested rules of thumb for dramatically improving estimation accuracy.</p>]]></content:encoded>
 </item>
 <item rdf:about="/I_will_be_Giving_a_Keynote_at_the_Scrum_Alliance_Scrum_Gathering_May_17_2011/?blogid=23485">
  <title>I’ll be Giving a Keynote at the Scrum Alliance’s Scrum Gathering May 17, 2011</title>
  <link>https://www.construx.com/I_will_be_Giving_a_Keynote_at_the_Scrum_Alliance_Scrum_Gathering_May_17_2011/?blogid=23485</link>
  <description><![CDATA[<p>The Scrum Alliance <a href="http://www.scrumalliance.org/events/285-seattle" target="_blank">Scrum Gathering conference</a> is in Seattle this year, May 16-18, 2011. I’ll be giving the morning keynote on the second day. I’m excited to be able to share some of the details of Construx’s experiences helping organizations move to organization-wide Scrum. Here are the details about my talk:</p>
<p>KEYNOTE: THE JOURNEY TO ORGANIZATION-WIDE SCRUM</p>
<p>Scrum practitioners know what a successful Scrum project looks like. After a few successful pilot projects, many organizations struggle when they try to roll out Scrum more broadly. What does it take to roll out Scrum organization-wide? How much does by-the-book Scrum change, and what stays the same? Where do you draw the line between ScrumBut vs. necessary adaptation? What are the common stumbling blocks, and how do you overcome them? Who has to be involved?</p>
<p>In this presentation, award-winning author Steve McConnell shares a typical organization’s gap analysis between small-pilot-project success and consistent-large-project success. He describes the work needed from technical contributors, technical leaders, executive managers, and other business partners to implement Scrum. And he describes the path that has allowed Construx’s clients to realize the benefits of Scrum in larger teams, geographically distributed teams, and more complex organizations. </p>
<p>Here’s the <a href="http://www.scrumalliance.org/events/285-seattle" target="_blank">info on the conference</a>. </p>]]></description>
  <dc:creator>stevemcc</dc:creator>
  <dc:date>2011-03-30T18:55:50Z</dc:date>
  <content:encoded><![CDATA[<p>The Scrum Alliance <a href="http://www.scrumalliance.org/events/285-seattle" target="_blank">Scrum Gathering conference</a> is in Seattle this year, May 16-18, 2011. I’ll be giving the morning keynote on the second day. I’m excited to be able to share some of the details of Construx’s experiences helping organizations move to organization-wide Scrum. Here are the details about my talk:</p>
<p>KEYNOTE: THE JOURNEY TO ORGANIZATION-WIDE SCRUM</p>
<p>Scrum practitioners know what a successful Scrum project looks like. After a few successful pilot projects, many organizations struggle when they try to roll out Scrum more broadly. What does it take to roll out Scrum organization-wide? How much does by-the-book Scrum change, and what stays the same? Where do you draw the line between ScrumBut vs. necessary adaptation? What are the common stumbling blocks, and how do you overcome them? Who has to be involved?</p>
<p>In this presentation, award-winning author Steve McConnell shares a typical organization’s gap analysis between small-pilot-project success and consistent-large-project success. He describes the work needed from technical contributors, technical leaders, executive managers, and other business partners to implement Scrum. And he describes the path that has allowed Construx’s clients to realize the benefits of Scrum in larger teams, geographically distributed teams, and more complex organizations. </p>
<p>Here’s the <a href="http://www.scrumalliance.org/events/285-seattle" target="_blank">info on the conference</a>. </p>]]></content:encoded>
 </item>
 <item rdf:about="/10x_Software_Development/New_Software_Estimation_Survey/?blogid=23485">
  <title>New Software Estimation Survey</title>
  <link>https://www.construx.com/10x_Software_Development/New_Software_Estimation_Survey/?blogid=23485</link>
  <description><![CDATA[<p>I’m working with Ryan Nelson and Mike Morris at University of Virginia to conduct a new survey of software estimation in practice. If you can take just a few minutes to answer some survey questions, this will help us get an update on the kinds of estimation practices people are actually using today. </p>
<p>Here’s the link to the survey: <a href="http://www.surveymonkey.com/s/uvaestimationsurvey">http://www.surveymonkey.com/s/uvaestimationsurvey</a></p>
<p>Thanks for your participation. </p>]]></description>
  <dc:creator>stevemcc</dc:creator>
  <dc:date>2011-03-21T14:16:13Z</dc:date>
  <content:encoded><![CDATA[<p>I’m working with Ryan Nelson and Mike Morris at University of Virginia to conduct a new survey of software estimation in practice. If you can take just a few minutes to answer some survey questions, this will help us get an update on the kinds of estimation practices people are actually using today. </p>
<p>Here’s the link to the survey: <a href="http://www.surveymonkey.com/s/uvaestimationsurvey">http://www.surveymonkey.com/s/uvaestimationsurvey</a></p>
<p>Thanks for your participation. </p>]]></content:encoded>
 </item>
 <item rdf:about="/10x_Software_Development/My_Books_Are_Now_Available_in_Kindle,_PDF,_and_Other_Electronic_Formats/?blogid=23485">
  <title>My Books Are Now Available in Kindle, PDF, and Other Electronic Formats</title>
  <link>https://www.construx.com/10x_Software_Development/My_Books_Are_Now_Available_in_Kindle,_PDF,_and_Other_Electronic_Formats/?blogid=23485</link>
  <description><![CDATA[<p><span>Readers have asked for years for electronic versions of my books, and I&amp;rsquo;m happy to say that electronic versions are now available for all of my Microsoft Press books. </span></p>
<ul>
<li><span><span style="FONT-FAMILY: Arial"><strong>Software Estimation</strong> from Amazon.com in </span></span><a href="http://www.amazon.com/exec/obidos/ISBN=0735605351/stevemcconnelcon/"><span>paperback</span></a><span> or </span><a href="http://www.amazon.com/Software-Estimation-Demystifying-Black-ebook/dp/B0043EWTMG/ref=tmm_kin_title_0?ie=UTF8&amp;m=AG56TWVU5XWC2"><span>Kindle</span></a><span> formats or from O'Reilly in various other </span><a href="http://oreilly.com/catalog/9780735605350/"><span>Ebook</span></a><span> formats (including pdf)</span></li>
<li><span><span style="FONT-FAMILY: Arial"><strong>Code Complete, 2nd Edition </strong>from Amazon.com in </span></span><a href="http://www.amazon.com/exec/obidos/ISBN=0735619670/stevemcconnelcon/"><span>paperback</span></a><span> or </span><a href="http://www.amazon.com/Code-Complete-ebook/dp/B004OR1XGK/ref=tmm_kin_title_0?ie=UTF8&amp;m=AG56TWVU5XWC2"><span>Kindle edition</span></a><span> or O'Reilly in various </span><a href="http://oreilly.com/catalog/9780735619678"><span>Ebook formats</span></a><span> (including pdf)</span></li>
<li><span><span style="FONT-FAMILY: Arial"><strong>Rapid Development</strong> from Amazon.com in </span></span><a href="http://www.amazon.com/exec/obidos/ISBN=1556159005/stevemcconnelconA/"><span>paperback</span></a><span> or </span><a href="http://www.amazon.com/Rapid-Development-ebook/dp/B004OR1XXS/ref=tmm_kin_title_0?ie=UTF8&amp;m=AG56TWVU5XWC2"><span>Kindle</span></a><span> formats or from O'Reilly in various </span><a href="http://oreilly.com/catalog/9781556159008/"><span>Ebook</span></a><span> formats (including pdf)</span></li>
<li><span><span style="FONT-FAMILY: Arial"><strong>Software Project Survival Guide </strong>from Amazon.com in </span></span><a href="http://www.amazon.com/exec/obidos/ISBN=1572316217/stevemcconnelconA/"><span>paperback</span></a><span> or </span><a href="http://www.amazon.com/Software-Project-Survival-Guide-ebook/dp/B0043M58VW/ref=tmm_kin_title_0?ie=UTF8&amp;m=AG56TWVU5XWC2"><span>Kindle</span></a><span> formats or from O'Reilly in various other </span><a href="http://oreilly.com/catalog/9781572316218/"><span>Ebook</span></a><span> formats (including pdf)</span></li>
</ul>]]></description>
  <dc:creator>stevemcc</dc:creator>
  <dc:date>2011-03-07T13:47:00Z</dc:date>
  <content:encoded><![CDATA[<p>Readers have asked for years for electronic versions of my books, and I’m happy to say that electronic versions are now available for all of my Microsoft Press books.</p>
<ul>
<li><strong>Software Estimation</strong> from Amazon.com in <a href="http://www.amazon.com/exec/obidos/ISBN=0735605351/stevemcconnelcon/">paperback</a> or <a href="http://www.amazon.com/Software-Estimation-Demystifying-Black-ebook/dp/B0043EWTMG/ref=tmm_kin_title_0?ie=UTF8&amp;m=AG56TWVU5XWC2">Kindle</a> formats or from O'Reilly in various other <a href="http://oreilly.com/catalog/9780735605350/">Ebook</a> formats (including pdf)</li>
<li><strong>Code Complete, 2nd Edition </strong>from Amazon.com in <a href="http://www.amazon.com/exec/obidos/ISBN=0735619670/stevemcconnelcon/">paperback</a> or <a href="http://www.amazon.com/Code-Complete-ebook/dp/B004OR1XGK/ref=tmm_kin_title_0?ie=UTF8&amp;m=AG56TWVU5XWC2">Kindle edition</a> or O'Reilly in various <a href="http://oreilly.com/catalog/9780735619678">Ebook formats</a> (including pdf)</li>
<li><strong>Rapid Development</strong> from Amazon.com in <a href="http://www.amazon.com/exec/obidos/ISBN=1556159005/stevemcconnelconA/">paperback</a> or <a href="http://www.amazon.com/Rapid-Development-ebook/dp/B004OR1XXS/ref=tmm_kin_title_0?ie=UTF8&amp;m=AG56TWVU5XWC2">Kindle</a> formats or from O'Reilly in various <a href="http://oreilly.com/catalog/9781556159008/">Ebook</a> formats (including pdf)</li>
<li><strong>Software Project Survival Guide </strong>from Amazon.com in <a href="http://www.amazon.com/exec/obidos/ISBN=1572316217/stevemcconnelconA/">paperback</a> or <a href="http://www.amazon.com/Software-Project-Survival-Guide-ebook/dp/B0043M58VW/ref=tmm_kin_title_0?ie=UTF8&amp;m=AG56TWVU5XWC2">Kindle</a> formats or from O'Reilly in various other <a href="http://oreilly.com/catalog/9781572316218/">Ebook</a> formats (including pdf)</li>
</ul>]]></content:encoded>
 </item>
 <item rdf:about="/Why_Didnot_I_Like_The_Social_Network/?blogid=23485">
  <title>Why Didn’t I Like “The Social Network?”</title>
  <link>https://www.construx.com/Why_Didnot_I_Like_The_Social_Network/?blogid=23485</link>
  <description><![CDATA[<p>The title of this blog entry is an actual question. I really don’t understand why I didn’t like “The Social Network” more than I did. </p>
<p>Based on stellar reviews on <a href="http://www.rottentomatoes.com/m/the-social-network/">Rotten Tomatoes</a> and a good price on Amazon, I preordered The Social Network on blu-ray, which I watched Friday night. </p>
<p>This movie has many elements I should like. The screenplay was written by Aaron Sorkin, who has written some of my favorite movies (A Few Good Men, The American President, Charlie Wilson’s War). The subject is the area I’ve spent my whole career in—the software industry—and it zeroes in on a specialty area that’s even more interesting: factors that contribute to success in startup environments. The movie steered clear of my biggest gripe about computer-related movies, which is focusing on hackers to the exclusion of everything else. The dialog was fast and witty. The acting was good across the board. Jesse Eisenberg portrayed Mark Zuckerberg as an intriguing, complex character. So why didn’t this movie work for me? </p>
<p>Roger Ebert’s <a href="http://rogerebert.suntimes.com/apps/pbcs.dll/article?AID=/20100929/REVIEWS/100929984">review</a> gives an interesting non-programmer perspective. Ebert says,</p>
<blockquote><p>“It is said to be impossible to make a movie about a writer, because how can you show him only writing? It must also be impossible to make a movie about a computer programmer, because what is programming but writing in a language few people in the audience know? …</p>
<p>“The Social Network” is a great film not because of its dazzling style or visual cleverness, but because it is splendidly well-made. Despite the baffling complications of computer programming, web strategy and big finance, Aaron Sorkin’s screenplay makes it all clear.”</p>
</blockquote>
<p>I think what Ebert is saying (reading between the lines) is that computer programming is so inscrutable to most people that ANY insight into how it’s done will be of interest – in part because 20-somethings can make billions doing it. </p>
<p>I think part of my issue is that I already knew that 20-somethings could make billions creating software. I’ve lived next door to Microsoft and Amazon my whole adult life and have been buying Dell computers as long as I can remember. This is not news to anyone I know. In the closing frames of the movie, when the final crawl tells the end of the story (“Facebook is worth $25 billion”) I felt like saying, “Was that really the point? EVERYBODY knew that before this movie came out.” </p>
<p>Another reaction I had was that, if we didn’t already know that the Facebook story was true and feel like it was giving us the inside view of how Facebook was created, the movie would not be interesting at all. The problem is, the most emotionally appealing parts of the movie are fictitious. The movie opens with a dialog between Zuckerberg and Erica, his soon-to-be ex-girlfriend from Boston University. Several times throughout the movie we see Zuckerberg reflecting wistfully about Erica. And in the final scene in the movie [spoiler alert], a lonely-looking Zuckerberg attempts to reconnect with Erica on Facebook. So there’s a dramatic symmetry from the beginning of the movie to the end, which is nice, except for the fact that it’s entirely fabricated. There was no girlfriend from BU, and in real life Zuckerberg has been with the same girlfriend from the year he began Facebook to the present. </p>
<p>Eisenberg’s portrayal of the intensely focused, socially awkward megalomaniac software genius was spot on. For people who haven’t previously been exposed to this type of person (like Roger Ebert), apparently that character portrayal was interesting enough to carry the movie. For me it wasn’t. I’ve spent the last 25 years around people like that, and to me that set of attributes is common enough to have become an archetype. Accurate portrayal of the archetype is a good place to start but is not in itself sufficient to make a great movie. I’m sure this archetype is more familiar to me than to much of the non-software public, and so maybe that’s the reason the movie was more interesting to other people than it was to me. </p>
<p>There were other story lines that could have been interesting but that were left undeveloped. Eduardo Saverin, Zuckerberg’s college-buddy/CFO, invests seed money but seems to lack the commitment to Facebook that many of the other players had. There is a potentially interesting exploration of the nature of entrepreneurial commitment, why some people have it and some people don’t, what it really takes to succeed as an entrepreneur, and whether in the end it’s all worth it. But that’s left unexplored. </p>
<p>The 3-way interactions among Zuckerberg, Sean Parker, and Eduardo Saverin looked for a while like they might develop into an interesting story. I was an expert witness in a lawsuit that had a similar triangle between two founders and a third party, and the pathologies were fascinating. But the movie just scratches the surface of those topics too. </p>
<p>There was also a potentially interesting investigation of, Whose idea was Facebook really? What is the nature of intellectual property ownership? What gives someone the right to call something “my idea?” There was a subplot in which the Winklevoss twins hire Zuckerberg to create a Harvard-only networking site that was intended to be in the same social networking ballpark as Facebook. Zuckerberg goes off and creates Facebook instead of working on the Winklevosses’ project. That could have been used to explore the question of, What does it really mean to come up with an idea? The closest the movie gets to exploring these issues is a comment from Zuckerberg about “A guy who makes a nice chair doesn’t owe money to everyone who has ever built a chair.” </p>
<p>The Winklevosses are drawn somewhat sympathetically, but as portrayed in the movie I think they represent capitalism at its worst. Their idea is half-formed. They have no ability to implement the idea themselves. The details they present to Zuckerberg get changed into something unrecognizable as he creates Facebook. If Zuckerberg had implemented their idea as they described it to him, it would have gone nowhere. The Winklevosses are obviously capable of doing hard work (they’re Olympic rowers), but they’re not capable of doing the specific type of hard work required to create Facebook. Nonetheless, they think that because they hired Zuckerberg to work on a social networking project, Facebook should be theirs. They hired him to create a nice chair. He never got around to creating their chair, and during the time he was supposed to be creating their chair he created an entirely new and different kind of chair that revolutionized the furniture industry. Does hiring him to build one kind of chair somehow give them a right to the amazing chair? The movie doesn’t explore that question. </p>
<p>Personally, I think that good ideas are a dime a dozen. The talent, energy and will to convert an idea into an appealing product is infinitely rarer. </p>
<p>In the absence of any story line that develops to any significant degree, what we’re left with is a lot of witty dialog that adds up to not much of a story, and an interesting character study of one of the 21st century’s well-known geniuses. That character study could provide interesting insights, except that the most interesting details were fabricated, which leaves us with no real insights after all. And that leaves us with a movie that has very little to offer other than the fact that, as Ebert says, it was “splendidly well made.” For me that wasn’t enough. </p>]]></description>
  <dc:creator>stevemcc</dc:creator>
  <dc:date>2011-02-07T16:08:41Z</dc:date>
  <content:encoded><![CDATA[<p>The title of this blog entry is an actual question. I really don’t understand why I didn’t like “The Social Network” more than I did.</p>
<p>Based on stellar reviews on <a href="http://www.rottentomatoes.com/m/the-social-network/">Rotten Tomatoes</a> and a good price on Amazon, I preordered The Social Network on blu-ray, which I watched Friday night.</p>
<p>This movie has many elements I should like. The screenplay was written by Aaron Sorkin, who has written some of my favorite movies (A Few Good Men, The American President, Charlie Wilson’s War). The subject is the area I’ve spent my whole career in—the software industry—and it zeroes in on a specialty area that’s even more interesting: factors that contribute to success in startup environments. The movie steered clear of my biggest gripe about computer-related movies, which is focusing on hackers to the exclusion of everything else. The dialog was fast and witty. The acting was good across the board. Jesse Eisenberg portrayed Mark Zuckerberg as an intriguing, complex character. So why didn’t this movie work for me?</p>
<p>Roger Ebert’s <a href="http://rogerebert.suntimes.com/apps/pbcs.dll/article?AID=/20100929/REVIEWS/100929984">review</a> gives an interesting non-programmer perspective. Ebert says,</p>
<p>“It is said to be impossible to make a movie about a writer, because how can you show him only writing? It must also be impossible to make a movie about a computer programmer, because what is programming but writing in a language few people in the audience know? …</p>
<p>“The Social Network” is a great film not because of its dazzling style or visual cleverness, but because it is splendidly well-made. Despite the baffling complications of computer programming, web strategy and big finance, Aaron Sorkin's screenplay makes it all clear.”</p>
<p>I think what Ebert is saying (reading between the lines) is that computer programming is so inscrutable to most people that ANY insight into how it’s done will be of interest – in part because 20-somethings can make billions doing it.</p>
<p>I think part of my issue is that I already knew that 20-somethings could make billions creating software. I’ve lived next door to Microsoft and Amazon my whole adult life and have been buying Dell computers as long as I can remember. This is not news to anyone I know. In the closing frames of the movie, when the final crawl tells the end of the story (“Facebook is worth $25 billion”) I felt like saying, “Was that really the point? EVERYBODY knew that before this movie came out.”</p>
<p>Another reaction I had was that, if we didn’t already know that the Facebook story was true and feel like it was giving us the inside view of how Facebook was created, the movie would not be interesting at all. The problem is, the most emotionally appealing parts of the movie are fictitious. The movie opens with a dialog between Zuckerberg and Erica, his soon-to-be ex-girlfriend from Boston University. Several times throughout the movie we see Zuckerberg reflecting wistfully about Erica. And in the final scene in the movie [spoiler alert], a lonely-looking Zuckerberg attempts to reconnect with Erica on Facebook. So there’s a dramatic symmetry from the beginning of the movie to the end, which is nice, except for the fact that it’s entirely fabricated. There was no girlfriend from BU, and in real life Zuckerberg has been with the same girlfriend from the year he began Facebook to the present.</p>
<p>Eisenberg’s portrayal of the intensely focused, socially awkward megalomaniac software genius was spot on. For people who haven’t previously been exposed to this type of person (like Roger Ebert), apparently that character portrayal was interesting enough to carry the movie. For me it wasn’t. I’ve spent the last 25 years around people like that, and to me that set of attributes is common enough to have become an archetype. Accurate portrayal of the archetype is a good place to start but is not in itself sufficient to make a great movie. I’m sure this archetype is more familiar to me than to much of the non-software public, and so maybe that’s the reason the movie was more interesting to other people than it was to me.</p>
<p>There were other story lines that could have been interesting but that were left undeveloped. Eduardo Saverin, Zuckerberg’s college-buddy/CFO, invests seed money but seems to lack the commitment to Facebook that many of the other players had. There is a potentially interesting exploration of the nature of entrepreneurial commitment, why some people have it and some people don’t, what it really takes to succeed as an entrepreneur, and whether in the end it’s all worth it. But that’s left unexplored.</p>
<p>The 3-way interactions among Zuckerberg, Sean Parker, and Eduardo Saverin looked for a while like they might develop into an interesting story. I was an expert witness in a lawsuit that had a similar triangle between two founders and a third party, and the pathologies were fascinating. But the movie just scratches the surface of those topics too.</p>
<p>There was also a potentially interesting investigation of, Whose idea was Facebook really? What is the nature of intellectual property ownership? What gives someone the right to call something “my idea?” There was a subplot in which the Winklevoss twins hire Zuckerberg to create a Harvard-only networking site that was intended to be in the same social networking ballpark as Facebook. Zuckerberg goes off and creates Facebook instead of working on the Winklevosses’ project. That could have been used to explore the question of, What does it really mean to come up with an idea? The closest the movie gets to exploring these issues is a comment from Zuckerberg about “A guy who makes a nice chair doesn't owe money to everyone who has ever built a chair.”</p>
<p>The Winklevosses are drawn somewhat sympathetically, but as portrayed in the movie I think they represent capitalism at its worst. Their idea is half-formed. They have no ability to implement the idea themselves. The details they present to Zuckerberg get changed into something unrecognizable as he creates Facebook. If Zuckerberg had implemented their idea as they described it to him, it would have gone nowhere. The Winklevosses are obviously capable of doing hard work (they’re Olympic rowers), but they’re not capable of doing the specific type of hard work required to create Facebook. Nonetheless, they think that because they hired Zuckerberg to work on a social networking project, Facebook should be theirs. They hired him to create a nice chair. He never got around to creating their chair, and during the time he was supposed to be creating their chair he created an entirely new and different kind of chair that revolutionized the furniture industry. Does hiring him to build one kind of chair somehow give them a right to the amazing chair? The movie doesn’t explore that question.</p>
<p>Personally, I think that good ideas are a dime a dozen. The talent, energy and will to convert an idea into an appealing product is infinitely rarer.</p>
<p>In the absence of any story line that develops to any significant degree, what we’re left with is a lot of witty dialog that adds up to not much of a story, and an interesting character study of one of the 21st century’s well-known geniuses. That character study could provide interesting insights, except that the most interesting details were fabricated, which leaves us with no real insights after all. And that leaves us with a movie that has very little to offer other than the fact that, as Ebert says, it was “splendidly well made.” For me that wasn’t enough.</p>]]></content:encoded>
 </item>
 <item rdf:about="/10x_Software_Development/Technical_Debt_Webinar_Recording_is_Now_Available/?blogid=23485">
  <title>Technical Debt Webinar Recording is Now Available</title>
  <link>https://www.construx.com/10x_Software_Development/Technical_Debt_Webinar_Recording_is_Now_Available/?blogid=23485</link>
  <description><![CDATA[<p>View it <a href="http://www.construx.com/Page.aspx?hid=3277">here</a> (free membership required to view). </p>
<h3>Webinar - Managing Technical Debt</h3>
<p>"Technical Debt" refers to delayed technical work that is incurred when technical short cuts are taken, usually in pursuit of calendar-driven software schedules. Technical debt is inherently neither good nor bad: Just like financial debt, some technical debts can serve valuable business purposes. Other technical debts are simply counterproductive. However, just as with the financial kind, it's important to know what you're getting into. </p>
<p>In this one-hour webinar, noted author and software engineer Steve McConnell explains in detail the different types of technical debt, when organizations should and shouldn't take them on, and best practices in managing, tracking and paying down debt. You'll gain insights into how to use technical debt strategically and how to keep technical and business staff involved in the process. </p>
<p>View it <a href="http://www.construx.com/Page.aspx?hid=3277">here</a> (free membership required to view).</p>]]></description>
  <dc:creator>stevemcc</dc:creator>
  <dc:date>2011-02-07T13:30:00Z</dc:date>
  <content:encoded><![CDATA[<p>View it <a href="https://www.construx.com/Resources/Webinar/Managing_Technical_Debt/">here</a> (free membership required to view).</p>
<p><strong>Webinar - Managing Technical Debt</strong></p>
<p>"Technical Debt" refers to delayed technical work that is incurred when technical short cuts are taken, usually in pursuit of calendar-driven software schedules. Technical debt is inherently neither good nor bad: Just like financial debt, some technical debts can serve valuable business purposes. Other technical debts are simply counterproductive. However, just as with the financial kind, it's important to know what you're getting into.</p>
<p>In this one-hour webinar, noted author and software engineer Steve McConnell explains in detail the different types of technical debt, when organizations should and shouldn't take them on, and best practices in managing, tracking and paying down debt. You'll gain insights into how to use technical debt strategically and how to keep technical and business staff involved in the process.</p>
<p>View it <a href="https://www.construx.com/Resources/Webinar/Managing_Technical_Debt/">here</a> (free membership required to view).</p>]]></content:encoded>
 </item>
 <item rdf:about="/10x_Software_Development/2011_Executive_Discussion_Topics_Announced/?blogid=23485">
  <title>2011 Executive Discussion Topics Announced</title>
  <link>https://www.construx.com/10x_Software_Development/2011_Executive_Discussion_Topics_Announced/?blogid=23485</link>
  <description><![CDATA[<p>Here are the ECSE group&amp;rsquo;s Discussion topics for 2011: </p>
<table border="0" cellspacing="0" cellpadding="0">
<tbody>
<tr height="25">
<td width="17"> </td>
<td width="150">January</td>
<td width="476">Job Market 2011: Compensation, Recruiting, and Retention Issues </td>
</tr>
<tr height="25">
<td width="18"> </td>
<td width="150">February</td>
<td width="481">Organizational Structures </td>
</tr>
<tr height="25">
<td width="19"> </td>
<td width="150">March </td>
<td width="480">Motivating Software Development Teams </td>
</tr>
<tr height="25">
<td width="20"> </td>
<td width="150">April</td>
<td width="478">Issues for Software Development in Hardware-Centric Environments </td>
</tr>
<tr height="25">
<td width="20"> </td>
<td width="150">May</td>
<td width="478">The Cloud's Impact on Software Development</td>
</tr>
<tr height="25">
<td width="21"> </td>
<td width="150">June</td>
<td width="477">Scaling Agile</td>
</tr>
<tr height="25">
<td width="22"> </td>
<td width="150">July</td>
<td width="476">Software Security Issues </td>
</tr>
<tr height="25">
<td width="23"> </td>
<td width="150">August</td>
<td width="475"><em>Summer Break</em></td>
</tr>
<tr height="25">
<td width="24"> </td>
<td width="150">September</td>
<td width="474">Working with Software Business Partners</td>
</tr>
<tr height="25">
<td width="24"> </td>
<td width="150">October</td>
<td width="474">Managing Maintenance and Sustaining Engineering</td>
</tr>
<tr height="25">
<td width="24"> </td>
<td width="150">November</td>
<td width="474">Working with the Data Center</td>
</tr>
<tr height="25">
<td width="24"> </td>
<td width="150">December</td>
<td width="474">Design-Driven vs. Engineering-Driven Software Product Development </td>
</tr>
</tbody>
</table>
<p>In-person ECSE discussions in Bellevue are open to Seattle-area software executives who have multi-project span of responsibility and nominally oversee 100 or more technical staff. If you&amp;rsquo;re interested in participating, please <a href="mailto://stevemcc@construx.com?subject=Interested in ECSE Meetings">contact me</a>. Dial-in discussions are open to executives world-wide who have attended Construx&amp;rsquo;s annual <a href="http://www.construx.com/summit">Software Executive Summit</a> or by invitation to qualified executives on a space-available basis. If you or an executive you work with is interested in either of these groups, please <a href="mailto://stevemcc@construx.com?subject=Interested in ECSE Meetings">get in touch</a>. </p>]]></description>
  <dc:creator>stevemcc</dc:creator>
  <dc:date>2011-02-03T16:54:00Z</dc:date>
  <content:encoded><![CDATA[<p>Here are the ECSE group's Discussion topics for 2011:</p>
<!-- activity start --><div class="resourceBlock clearfix"><div class="resourceInnerBlock clearfix"><div class="rbLeft">January </div>
<div class="rbRight"><ul>
<li>Job Market 2011: Compensation, Recruiting, and Retention Issues </li>
</ul>
</div>
<div class="rbLeft">February </div>
<div class="rbRight"><ul>
<li>Organizational Structures </li>
</ul>
</div>
<div class="rbLeft">March </div>
<div class="rbRight"><ul>
<li>Motivating Software Development Teams </li>
</ul>
</div>
<div class="rbLeft">April </div>
<div class="rbRight"><ul>
<li>Issues for Software Development in Hardware-Centric Environments </li>
</ul>
</div>
<div class="rbLeft">May </div>
<div class="rbRight"><ul>
<li>The Cloud's Impact on Software Development </li>
</ul>
</div>
<div class="rbLeft">June </div>
<div class="rbRight"><ul>
<li>Scaling Agile </li>
</ul>
</div>
<div class="rbLeft">July </div>
<div class="rbRight"><ul>
<li>Software Security Issues </li>
</ul>
</div>
<div class="rbLeft">August </div>
<div class="rbRight"><ul>
<li><em>Summer Break</em>  </li>
</ul>
</div>
<div class="rbLeft">September </div>
<div class="rbRight"><ul>
<li>Working with Software Business Partners </li>
</ul>
</div>
<div class="rbLeft">October </div>
<div class="rbRight"><ul>
<li>Managing Maintenance and Sustaining Engineering </li>
</ul>
</div>
<div class="rbLeft">November </div>
<div class="rbRight"><ul>
<li>Working with the Data Center </li>
</ul>
</div>
<div class="rbLeft">December </div>
<div class="rbRight"><ul>
<li>Design-Driven vs. Engineering-Driven Software Product Development </li>
</ul>
</div>
</div>
</div>
<p>In-person ECSE discussions in Bellevue are open to Seattle-area software executives who have multi-project span of responsibility and nominally oversee 100 or more technical staff. If you're interested in participating, please <a href="mailto://stevemcc@construx.com?subject=Interested%20in%20ECSE%20Meetings">contact me</a>. Dial-in discussions are open to executives world-wide who have attended Construx's annual <a href="https://www.construx.com/Thought_Leadership/Events/Software_Executive_Summit_Page_Layout/?id=14501">Software Executive Summit</a> or by invitation to qualified executives on a space-available basis. If you or an executive you work with is interested in either of these groups, please <a href="mailto://stevemcc@construx.com?subject=Interested%20in%20ECSE%20Meetings">get in touch</a>.</p>]]></content:encoded>
 </item>
 <item rdf:about="/10x_Software_Development/10x_Productivity_Myths__Where_s_the_10x_Difference_in_Compensation_/?blogid=23485">
  <title>10x Productivity Myths: Where’s the 10x Difference in Compensation?</title>
  <link>https://www.construx.com/10x_Software_Development/10x_Productivity_Myths__Where_s_the_10x_Difference_in_Compensation_/?blogid=23485</link>
<description><![CDATA[
<p style="margin: 0in 0in 0pt;">In response to my <a href="http://www.construx.com/blogs/stevemcc/archive/2011/01/09/origins-of-10x-how-valid-is-the-underlying-research.aspx">recent blog post</a> on the research support for 10x productivity differences among programmers, Pete McBreen made the following <a href="http://www.improvingwetware.com/2011/01/11/in-validating-the-10x-productivity-difference-claim">comment</a>: </p>
<blockquote><p>"One point in his article that McConnell did not address--<strong>programmer compensation does not vary accordingly</strong>. This is a telling point--if the difference in productivity can be 10X, why is it that salaries rarely fall outside the 2X range for experienced developers?" [emphasis in original]</p>
</blockquote>
<p style="margin: 0in 0in 0pt;">This is a good question. It&amp;rsquo;s timely because the Software Engineering Productivity group on LinkedIn has recently had a 130-comment discussion on the question of &amp;ldquo;Should pay be tied directly to productivity?&amp;rdquo; It&amp;rsquo;s also a question that I wrestled with personally for about the first 10 years of my career. Indeed, it&amp;rsquo;s part of the original reason I decided to become self-employed back in 1989 and eventually founded my own company in 1996. </p>
<h3><span>The Intuitive Version of the Question</span></h3>
<p>I started my personal &amp;ldquo;10x compensation quest&amp;rdquo; from the point of view of, &amp;ldquo;I know I&amp;rsquo;m 3-5x as productive as the guy sitting next to me. Why am I not making 3-5x as much money?&amp;rdquo; Over a period of many years I found that this formulation of the question embodied several assumptions that were na&amp;iuml;ve or just plain wrong from a business perspective. </p>
<h3><span>Six Myths of 10x Compensation</span></h3>
<p>Let&amp;rsquo;s look at each of these myths of 10x compensation. </p>
<p><strong>Myth 1. The guy next to me is getting paid what he&amp;rsquo;s worth. </strong>If I&amp;rsquo;m really 5x as productive as the guy sitting next to me, part of that is that I&amp;rsquo;m really productive, and part of that is that the guy next to me is <em>not </em>very productive. Let&amp;rsquo;s say that we&amp;rsquo;re both first-year programmers and both making $65,000 (i.e., pretty typical first-year programmer comp in major markets these days). Me being 5x as productive as the other guy does not mean I should be making 5 * $65,000. It probably means something more like the other guy should be making $20,000 and I should be making $100,000. Part of the issue is that I&amp;rsquo;m underpaid a little; a bigger part of the issue is that the other guy is overpaid <em>a lot</em>. </p>
<p>My personal observation is that the average company has something like 20% of its programmers who aren&amp;rsquo;t contributing anything meaningful to the business and whose compensation should really be <em>zero</em>. In many companies, star performers&amp;rsquo; low compensation is essentially subsidizing poor performers&amp;rsquo; salaries.</p>
<p>Some people think, &amp;ldquo;If I&amp;rsquo;m a 10x programmer, I should be making 10x the average compensation.&amp;rdquo; But the 10x ratio is not 10x from best to <em>average</em>; it&amp;rsquo;s 10x from best to <em>worst</em>. If you think you should be making 10x what the worst programmers make, and the worst programmers should be making <em>nothing</em>, be careful what you wish for! </p>
<p><strong>Myth 2. &amp;ldquo;Programming productivity&amp;rdquo; = &amp;ldquo;value to the business.&amp;rdquo;</strong> When someone says, &amp;ldquo;I&amp;rsquo;m 10x as good a programmer, therefore I should be paid 10x as much,&amp;rdquo; they&amp;rsquo;re assuming that their value to the business is based on their programming capability/contribution. That is part of the story, but not the whole story. Some mediocre programmers might be better at interacting with customers. Some might have better potential to move into management. Some might have less personal output but a wonderfully positive influence on overall team output. There are lots of other factors that influence &amp;ldquo;value to the business&amp;rdquo; besides raw programming output.  </p>
<p><strong>Myth 3. High output should be rewarded with high salary. </strong>What&amp;rsquo;s mythical about this statement depends on understanding the difference between salary and compensation. When a business sets a salary (as opposed to a bonus &amp;ndash; i.e., &amp;ldquo;fixed comp&amp;rdquo; vs. &amp;ldquo;variable comp&amp;rdquo;), the business is recognizing a person&amp;rsquo;s current contribution to the business, and it&amp;rsquo;s also making a calculated bet about the person&amp;rsquo;s contribution to the business in the future and over time. If I&amp;rsquo;m 5x as productive as the next guy this year, there&amp;rsquo;s no guarantee that I&amp;rsquo;ll be 5x as productive again next year. My motivation on the next project could be lower. I could be distracted by a new girlfriend, new wife, new baby, parent&amp;rsquo;s health issues, personal health issues, a new release of Call of Duty, etc. Most businesses won&amp;rsquo;t lower salaries except in extraordinary circumstances, so businesses are very conservative about increasing their employees&amp;rsquo; salaries. </p>
<p>The same basic reasoning applies to salary offers to new employees. If I&amp;rsquo;m looking at a guy with a 20-year track record of uninterrupted outstanding performance, I&amp;rsquo;ll make one kind of bet about his future productivity when I offer him a salary. If I&amp;rsquo;m looking at a guy with a 2-year track record, no matter how outstanding those 2 years have been, I&amp;rsquo;ll make a different kind of bet about his future productivity when I offer him a salary. </p>
<p>These issues are related to rewarding output with high salaries. Rewarding high output with high bonuses brings up different issues.   </p>
<p><strong>Myth 4. Businesses try to pay people based on what they&amp;rsquo;re worth to the business. </strong>This is true only in the most approximate sense. I used to think that the ideal business would go through the thought process of, &amp;ldquo;This person is contributing $Y in value to our business, so we can pay them some fraction of Y and still make a profit.&amp;rdquo; That isn&amp;rsquo;t how businesses work. In my experience, businesses don&amp;rsquo;t make any attempt whatsoever to figure out on a person-by-person basis how much each person contributes to the bottom line. <em>At best</em>, a business might go through an exercise of defining how much each <em>job </em>is worth (not each person) &amp;ndash; but those exercises don&amp;rsquo;t account for whether the person in each job is a 1x performer or a 10x performer. For that reason, any analysis of &amp;ldquo;this job is worth Y&amp;rdquo; without considering the level of performance of the person doing the job is a meaningless exercise. </p>
<p>Since businesses almost never know what a specific person doing a specific job is worth, <em>businesses generally pay people based on their market value, not on any calculation of their monetary contribution to the business. </em>Businesses pay people whatever they need to pay them in order to attract the people they want to attract and retain the people they want to retain. Businesses aren&amp;rsquo;t going to pay any more than they have to to fill any particular job, and so they&amp;rsquo;re not going to pay above market if they don&amp;rsquo;t have to.  If a business can attract and retain a 10x developer for a 2x salary, that&amp;rsquo;s what it will do.  </p>
<p>As McBreen pointed out, the market salary structure for programmers is relatively flat. In the local executive discussion group I host, earlier this month we discussed &amp;ldquo;Job Market 2011,&amp;rdquo; including compensation issues. In our area (Seattle), starting salaries are running about $50-$65K for people with less than 1 year of experience. Top end salaries are in the $125-$150K range for senior, star technical performers with no management responsibilities. McBreen stated that he saw less than a 2x difference in compensation in his area. The situation is actually worse than he described. In our area, there is approximately 2.5x difference <em>in the total range</em>, including <em>from most junior to most senior</em>. </p>
<p><strong>Myth 5. If a business wanted to pay based on productivity, it could measure individual productivity meaningfully enough to support its compensation decisions. </strong>As I discussed in a <a href="http://www.construx.com/blogs/stevemcc/archive/2008/04/09/measuring-productivity-of-individual-programmers.aspx">blog post in early 2008</a>, measuring a 10x productivity difference in a research setting is one thing. Measuring productivity of specific individuals in a live production environment, on an ongoing basis, is a totally different challenge. The research measurement is possible and practical and has been done several times. The live, on-the-job measurement is subject to numerous &amp;ldquo;measurement error&amp;rdquo; issues that in my view make such measurements extremely impractical, if not downright impossible.</p>
<p>In my first job after college, I worked as a programmer at a company that tried to tie pay to productivity. We had a "billing hour bonus." There was a formula for calculating the bonus, and there were lots of anomalies in the formula. To get over the anomalies, the boss tweaked the formula almost every month. By the time I left that company there were 17 variables in the formula and almost all the programmers thought it was a joke. It mostly rewarded one guy who liked to work long hours, even though we all knew he was the least productive person there (both in terms of individual contribution and in terms of impact on others).</p>
<p>If we can&amp;rsquo;t meaningfully measure differences in performance, we&amp;rsquo;re left with more subjective assessments of programmers&amp;rsquo; contributions to the business, which actually is how most businesses operate.</p>
<p><strong>Myth 6. Companies don&amp;rsquo;t adjust pay for differences in productivity. </strong>Good companies do try to recognize differences in productivity. They have technical ladders that parallel their management ladders so that really good technical people can make salaries comparable to managers. Good companies attempt to pay based on very rough approximations of productivity over time. That&amp;rsquo;s what different pay grade levels are for, and that&amp;rsquo;s what performance reviews are for.</p>
<p>Your reaction to that might be something like, &amp;ldquo;Yeah right. Performance reviews. Those are a joke. My boss doesn&amp;rsquo;t really have any idea what I&amp;rsquo;m doing. I usually write my own review, or my boss just spouts generalities.&amp;rdquo; That&amp;rsquo;s right. Those are common problems with performance reviews. And with that common experience, why would anyone think that &amp;ldquo;measuring productivity&amp;rdquo; would be any more reliable than that? We have decades of experience via performance reviews that says it wouldn&amp;rsquo;t be.</p>
<h3>In Summary, Is All This &amp;ldquo;Right,&amp;rdquo; or is This Just the Way it Is?</h3>
<p>I think it&amp;rsquo;s a little of both. I agree with the ideal of matching total compensation (not salary) to performance. But implementing that in practice is terribly challenging. The only practical way I can think of to truly tie pay to performance would be to move all employees to a contractor model and then pay them for well-defined pieces of work on a contract basis, with defined acceptance criteria and so on. I don&amp;rsquo;t think that&amp;rsquo;s practical, though, and it would undermine collaborative approaches like Scrum and also create some teamwork dynamics that I personally would rather not deal with. My overall conclusion is that paying for productivity on any more than a very-rough-approximation basis is an ideal that cannot practically be achieved.</p>
<p>As I&amp;rsquo;ve commented previously, the discrepancy between capability differences and compensation differences does create opportunities for companies that are willing to hire from the top of the talent pool to receive disproportionately greater levels of output in exchange for only modestly higher compensation. </p>
<p>This is not the answer I expected to find when I began asking the question almost 25 years ago, but I can see the reasons for it. Gerald Weinberg describes a pattern he calls "Things are the way they are because they got that way." I think this is one of those cases. </p>]]></description>
  <dc:creator>stevemcc</dc:creator>
  <dc:date>2011-01-22T17:42:00Z</dc:date>
  <content:encoded><![CDATA[<p>In response to my <a href="https://www.construx.com/10x_Software_Development/Origins_of_10X_–_How_Valid_is_the_Underlying_Research_/ ">recent blog post</a> on the research support for 10x productivity differences among programmers, Pete McBreen made the following <a href="http://www.improvingwetware.com/2011/01/11/in-validating-the-10x-productivity-difference-claim">comment</a>:</p>
<p>"One point in his article that McConnell did not address--<strong>programmer compensation does not vary accordingly</strong>. This is a telling point--if the difference in productivity can be 10X, why is it that salaries rarely fall outside the 2X range for experienced developers?" [emphasis in original]</p>
<p>This is a good question. It&amp;rsquo;s timely because the Software Engineering Productivity group on LinkedIn has recently had a 130-comment discussion on the question of &amp;ldquo;Should pay be tied directly to productivity?&amp;rdquo; It's also a question that I wrestled with personally for about the first 10 years of my career. Indeed, it's part of the original reason I decided to become self-employed back in 1989 and eventually founded my own company in 1996.</p>
<p><strong>The Intuitive Version of the Question</strong></p>
<p>I started my personal &amp;ldquo;10x compensation quest&amp;rdquo; from the point of view of, &amp;ldquo;I know I&amp;rsquo;m 3-5x as productive as the guy sitting next to me. Why am I not making 3-5x as much money?&amp;rdquo; Over a period of many years I found that this formulation of the question embodied several assumptions that were na&amp;iuml;ve or just plain wrong from a business perspective.</p>
<p><strong>Six Myths of 10x Compensation</strong></p>
<p>Let&amp;rsquo;s look at each of these myths of 10x compensation.</p>
<p><strong>Myth 1. The guy next to me is getting paid what he&amp;rsquo;s worth. </strong>If I&amp;rsquo;m really 5x as productive as the guy sitting next to me, part of that is that I&amp;rsquo;m really productive, and part of that is that the guy next to me is <em>not </em>very productive. Let&amp;rsquo;s say that we&amp;rsquo;re both first-year programmers and both making $65,000 (i.e., pretty typical first-year programmer comp in major markets these days). Me being 5x as productive as the other does not mean I should be making 5 * $65,000. It probably means something more like the other guy should be making $20,000 and I should be making $100,000. Part of the issue is that I&amp;rsquo;m underpaid a little; a bigger part of the issue is that the other guy is overpaid <em>a lot</em>.</p>
<p>My personal observation is that the average company has something like 20% of its programmers who aren&amp;rsquo;t contributing anything meaningful to the business and whose compensation should really be <em>zero</em>. In many companies, star performers&amp;rsquo; low compensation is essentially subsidizing poor performers&amp;rsquo; salaries.</p>
<p>Some people think, &amp;ldquo;If I&amp;rsquo;m a 10x programmer, I should be making 10x the average compensation.&amp;rdquo; But the 10x ratio is not 10x from best to <em>average</em>; it&amp;rsquo;s 10x from best to <em>worst</em>. If you think you should be making 10x what the worst programmers make, and the worst programmers should be making <em>nothing</em>, be careful what you wish for!</p>
<p><strong>Myth 2. &amp;ldquo;Programming productivity&amp;rdquo; = &amp;ldquo;value to the business.&amp;rdquo;</strong> When someone says, &amp;ldquo;I&amp;rsquo;m 10x as good a programmer, therefore I should be paid 10x as much,&amp;rdquo; they&amp;rsquo;re assuming that their value to the business is based on their programming capability/contribution. That is part of the story, but not the whole story. Some mediocre programmers might be better at interacting with customers. Some might have better potential to move into management. Some might have less personal output but a wonderfully positive influence on overall team output. There are lots of other factors that influence &amp;ldquo;value to the business&amp;rdquo; besides raw programming output.</p>
<p><strong>Myth 3. High output should be rewarded with high salary. </strong>What&amp;rsquo;s mythical about this statement depends on understanding the difference between salary and compensation. When a business sets a salary (as opposed to a bonus &amp;ndash; i.e., &amp;ldquo;fixed comp&amp;rdquo; vs. &amp;ldquo;variable comp&amp;rdquo;), the business is recognizing a person&amp;rsquo;s current contribution to the business, and it&amp;rsquo;s also making a calculated bet about the person&amp;rsquo;s contribution to the business in the future and over time. If I&amp;rsquo;m 5x as productive as the next guy this year, there&amp;rsquo;s no guarantee that I&amp;rsquo;ll be 5x as productive again next year. My motivation on the next project could be lower. I could be distracted by a new girlfriend, new wife, new baby, parent&amp;rsquo;s health issues, personal health issues, a new release of Call of Duty, etc. Most businesses won&amp;rsquo;t lower salaries except in extraordinary circumstances, so businesses are very conservative about increasing their employees&amp;rsquo; salaries.</p>
<p>The same basic reasoning applies to salary offers to new employees. If I&amp;rsquo;m looking at a guy with a 20-year track record of uninterrupted outstanding performance, I&amp;rsquo;ll make one kind of bet about his future productivity when I offer him a salary. If I&amp;rsquo;m looking at a guy with a 2-year track record, no matter how outstanding those 2 years have been, I&amp;rsquo;ll make a different kind of bet about his future productivity when I offer him a salary.</p>
<p>These issues are related to rewarding output with high salaries. Rewarding high output with high bonuses brings up different issues.</p>
<p><strong>Myth 4. Businesses try to pay people based on what they&amp;rsquo;re worth to the business. </strong>This is true only in the most approximate sense. I used to think that the ideal business would go through the thought process of, &amp;ldquo;This person is contributing $Y in value to our business, so we can pay them some fraction of Y and still make a profit.&amp;rdquo; That isn&amp;rsquo;t how businesses work. In my experience, businesses don&amp;rsquo;t make any attempt whatsoever to figure out on a person-by-person basis how much each person contributes to the bottom line. <em>At best</em>, a business might go through an exercise of defining how much each <em>job </em>is worth (not each person) &amp;ndash; but those exercises don&amp;rsquo;t account for whether the person in each job is a 1x performer or a 10x performer. For that reason, any analysis of &amp;ldquo;this job is worth Y&amp;rdquo; without considering the level of performance of the person doing the job is a meaningless exercise.</p>
<p>Since businesses almost never know what a specific person doing a specific job is worth, <em>businesses generally pay people based on their market value, not on any calculation of their monetary contribution to the business. </em>Businesses pay people whatever they need to pay them in order to attract the people they want to attract and retain the people they want to retain. Businesses aren&amp;rsquo;t going to pay any more than they have to to fill any particular job, and so they&amp;rsquo;re not going to pay above market if they don&amp;rsquo;t have to. If a business can attract and retain a 10x developer for a 2x salary, that&amp;rsquo;s what it will do.</p>
<p>As McBreen pointed out, the market salary structure for programmers is relatively flat. In the local executive discussion group I host, earlier this month we discussed &amp;ldquo;Job Market 2011,&amp;rdquo; including compensation issues. In our area (Seattle), starting salaries are running about $50-$65K for people with less than 1 year of experience. Top end salaries are in the $125-$150K range for senior, star technical performers with no management responsibilities. McBreen stated that he saw less than a 2x difference in compensation in his area. The situation is actually worse than he described. In our area, there is approximately 2.5x difference <em>in the total range</em>, including <em>from most junior to most senior</em>.</p>
<p><strong>Myth 5. If a business wanted to pay based on productivity, it could measure individual productivity meaningfully enough to support its compensation decisions. </strong>As I discussed in a <a title="blog post in early 2008" href="https://www.construx.com/10x_Software_Development/Measuring_Productivity_of_Individual_Programmers/">blog post in early 2008</a>, measuring a 10x productivity difference in a research setting is one thing. Measuring productivity of specific individuals in a live production environment, on an ongoing basis, is a totally different challenge. The research measurement is possible and practical and has been done several times. The live, on-the-job measurement is subject to numerous &amp;ldquo;measurement error&amp;rdquo; issues that in my view make such measurements extremely impractical, if not downright impossible.</p>
<p>In my first job after college, I worked as a programmer at a company that tried to tie pay to productivity. We had a "billing hour bonus." There was a formula for calculating the bonus, and there were lots of anomalies in the formula. To get over the anomalies, the boss tweaked the formula almost every month. By the time I left that company there were 17 variables in the formula and almost all the programmers thought it was a joke. It mostly rewarded one guy who liked to work long hours, even though we all knew he was the least productive person there (both in terms of individual contribution and in terms of impact on others).</p>
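<p>That anecdote has a general shape: any bonus formula weighted mostly toward billed hours rewards hours rather than output, no matter how many correction variables get bolted on. Here is a minimal sketch in Python, with entirely invented weights and names (not the actual 17-variable formula):</p>

```python
# Hypothetical sketch of a billing-hour bonus formula. The weights and
# variable names are invented for illustration; the real formula had
# 17 variables and was tweaked almost monthly.
def billing_bonus(billed_hours, output_units, hour_weight=50, output_weight=10):
    """Bonus keyed mostly to hours billed, only weakly to actual output."""
    return billed_hours * hour_weight + output_units * output_weight

# A long-hours, low-output programmer out-earns a productive 40-hour one:
grinder = billing_bonus(billed_hours=70, output_units=3)   # 70*50 + 3*10  = 3530
star = billing_bonus(billed_hours=40, output_units=20)     # 40*50 + 20*10 = 2200
print(grinder > star)  # True: the formula rewards hours, not productivity
```

<p>Tweaking more variables shifts the anomalies at the margin, but as long as hours dominate the weighting, the long-hours programmer wins.</p>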
<p>If we can&amp;rsquo;t meaningfully measure differences in performance, we&amp;rsquo;re left with more subjective assessments of programmers&amp;rsquo; contributions to the business, which actually is how most businesses operate.</p>
<p><strong>Myth 6. Companies don&amp;rsquo;t adjust pay for differences in productivity. </strong>Good companies do try to recognize differences in productivity. They have technical ladders that parallel their management ladders so that really good technical people can make salaries comparable to managers. Good companies attempt to pay based on very rough approximations of productivity over time. That's what different pay grade levels are for, and that's what performance reviews are for.</p>
<p>Your reaction to that might be something like, "Yeah right. Performance reviews. Those are a joke. My boss doesn't really have any idea what I'm doing. I usually write my own review, or my boss just spouts generalities." That's right. Those are common problems with performance reviews. And with that common experience, why would anyone think that "measuring productivity" would be any more reliable than that? We have decades of experience via performance reviews that says it wouldn't be.</p>
<p><strong>In Summary, Is All This &amp;ldquo;Right,&amp;rdquo; or is This Just the Way it Is?</strong></p>
<p>I think it&amp;rsquo;s a little of both. I agree with the ideal of matching total compensation (not salary) to performance. But implementing that in practice is terribly challenging. The only practical way I can think of to truly tie pay to performance would be to move all employees to a contractor model and then pay them for well-defined pieces of work on a contract basis, with defined acceptance criteria and so on. I don't think that's practical, though, and it would undermine collaborative approaches like Scrum and also create some teamwork dynamics that I personally would rather not deal with. My overall conclusion is that paying for productivity on any more than a very-rough-approximation basis is an ideal that cannot practically be achieved.</p>
<p>As I&amp;rsquo;ve commented previously, the discrepancy between capability differences and compensation differences does create opportunities for companies that are willing to hire from the top of the talent pool to receive disproportionately greater levels of output in exchange for only modestly higher compensation.</p>
<p>This is not the answer I expected to find when I began asking the question almost 25 years ago, but I can see the reasons for it. Gerald Weinberg describes a pattern he calls "Things are the way they are because they got that way." I think this is one of those cases.</p>]]></content:encoded>
 </item>
 <item rdf:about="/10x_Software_Development/Upcoming_Free_Webinar__A_Technical_Debt_Roadmap/?blogid=23485">
  <title>Upcoming Free Webinar: A Technical Debt Roadmap</title>
  <link>https://www.construx.com/10x_Software_Development/Upcoming_Free_Webinar__A_Technical_Debt_Roadmap/?blogid=23485</link>
  <description><![CDATA[<p>I’m excited about the webinar I’ll be leading on “A Technical Debt Roadmap.” It’s Tuesday, January 25, at 11:00 am Pacific Time. <a href="https://bzmediaevents.webex.com/bzmediaevents/onstage/g.php?t=a&amp;d=660202491&amp;SourceId=construx">Check it out</a>. </p>
<p>Here’s a description:</p>
<blockquote><p>"Technical Debt" refers to delayed technical work that is incurred when technical shortcuts are taken, usually in pursuit of calendar-driven software schedules. <br /><br />Technical debt is inherently neither good nor bad. Just like financial debt, some technical debts can serve valuable business purposes. Other technical debts are simply counterproductive. However, just as with financial debt, it’s important to know what you’re getting into. <br /><br />In this one-hour webinar, Steve McConnell explains in detail the different types of technical debt, when organizations should and shouldn’t take them on, and best practices in managing, tracking and paying down debt. You’ll gain insights into how to use technical debt strategically and how to keep technical and business staff involved in the process. Seats are limited, so <a href="https://bzmediaevents.webex.com/bzmediaevents/onstage/g.php?t=a&amp;d=660202491&amp;SourceId=construx">sign up</a> for this in-depth webinar today! </p>
</blockquote>]]></description>
  <dc:creator>stevemcc</dc:creator>
  <dc:date>2011-01-21T10:56:53Z</dc:date>
  <content:encoded><![CDATA[<p>I’m excited about the webinar I’ll be leading on “A Technical Debt Roadmap.” It’s Tuesday, January 25, at 11:00 am Pacific Time. <a href="https://bzmediaevents.webex.com/bzmediaevents/onstage/g.php?t=a&amp;d=660202491&amp;SourceId=construx">Check it out</a>.</p>
<p>Here’s a description:</p>
<p>"Technical Debt" refers to delayed technical work that is incurred when technical shortcuts are taken, usually in pursuit of calendar-driven software schedules. <br /><br />Technical debt is inherently neither good nor bad. Just like financial debt, some technical debts can serve valuable business purposes. Other technical debts are simply counterproductive. However, just as with financial debt, it's important to know what you're getting into. <br /><br />In this one-hour webinar, Steve McConnell explains in detail the different types of technical debt, when organizations should and shouldn't take them on, and best practices in managing, tracking and paying down debt. You'll gain insights into how to use technical debt strategically and how to keep technical and business staff involved in the process. Seats are limited, so <a href="https://bzmediaevents.webex.com/bzmediaevents/onstage/g.php?t=a&amp;d=660202491&amp;SourceId=construx">sign up</a> for this in-depth webinar today! </p>]]></content:encoded>
 </item>
 <item rdf:about="/10x_Software_Development/Origins_of_10X_–_How_Valid_is_the_Underlying_Research_/?blogid=23485">
  <title>Origins of 10X – How Valid is the Underlying Research?</title>
  <link>https://www.construx.com/10x_Software_Development/Origins_of_10X_–_How_Valid_is_the_Underlying_Research_/?blogid=23485</link>
  <description><![CDATA[<p>I recently contributed a chapter to <em>Making Software </em>(Oram and Wilson, eds., O&amp;rsquo;Reilly, 2011).  The purpose of this edited collection of essays is to pull together research-based writing on software engineering. In essence, the purpose is to say, &amp;ldquo;What do we really know (<em>quantitatively based</em>), and what do we only kind of think we know (<em>subjectively based</em>)?&amp;rdquo; My chapter, &amp;ldquo;What Does 10X Mean?&amp;rdquo; is an edited version of my 2008 blog entry &amp;ldquo;<a href="http://blogs.construx.com/blogs/stevemcc/archive/2008/03/27/productivity-variations-among-software-developers-and-teams-the-origin-of-quot-10x-quot.aspx">Productivity Variations Among Developers and Teams: The Origin of &amp;lsquo;10x&amp;rsquo;</a>.&amp;rdquo; The chapter focuses on the research that supports the claim of 10-fold differences in productivity among programmers. </p>
<p>Laurent Bossavit published a critique of my blog entry on his company&amp;rsquo;s <a href="http://blog.institut-agile.fr/2010/11/folklore-ou-fait-scientifique-comment.html">website in French</a>. That critique was translated into English on a <a href="http://morendil.github.com/folklore.html">different website</a>. </p>
<p>The critique (or its English translation, anyway) is quite critical of the claim that programmer productivity varies by 10x, quite critical of the research foundation for that claim, and quite critical of me personally. The specific nature of the criticism gives me an opportunity to talk about the state of research in software development and my approach to writing about software development, and to revisit the 10x issue, which is one of my favorite topics. </p>
<h3>The State of Software Engineering Research</h3>
<p>Bossavit&amp;rsquo;s criticism of my writing is notable for the fact that it cites my work, comments on some of the citations that my work cites, but doesn&amp;rsquo;t cite any other software-specific research of its own. </p>
<p>In marked contrast, while I was working on the early stages of <em>Code Complete, 1st Ed.</em>,  I read a paper by B. A. Sheil titled &amp;ldquo;The Psychological Study of Programming&amp;rdquo; (<em>Computing Surveys</em>, Vol. 13. No. 1, March 1981).  Sheil reviewed dozens of papers on programming issues with a specific eye toward the research methodologies used. The conclusion of Sheil&amp;rsquo;s paper was sobering. The programming studies he reviewed failed to control for variables carefully enough to meet research standards that would be needed for publication in other more established fields like psychology. The papers didn&amp;rsquo;t achieve levels of statistical significance good enough for publication in other fields either. In other words, the research foundation for software engineering (circa 1981) was poor. </p>
<p>One of the biggest issues identified was that studies didn&amp;rsquo;t control for differences in individual capabilities. Suppose you&amp;rsquo;ve got a new methodology you believe increases productivity and quality by 50%. If there are potential differences as large as 10x between individuals, the differences arising from individuals in any given study will drown out any differences you might want to attribute to a change in methodology. See Figure 1. </p>
<p><a href="http://blogs.construx.com/blogs/stevemcc/ProductivityVariation_3FBA81D7.jpg"><img title="ProductivityVariation" style="border-width: 0px; padding-top: 0px; padding-right: 0px; padding-left: 0px; display: inline; background-image: none;" alt="ProductivityVariation" src="http://blogs.construx.com/blogs/stevemcc/ProductivityVariation_thumb_1EC70F30.jpg" border="0" /></a></p>
<p><strong>Figure 1</strong></p>
<p>This is a very big deal because almost none of the research at the time I was working on <em>Code Complete</em> <em>1</em> controlled for this variable. For example, a study would have Programmer Group A read a sample of code formatted using Technique X and Programmer Group B read a sample of code formatted using Technique Y. If Group A was found to be 25% more productive than Group B, you don&amp;rsquo;t really know whether it&amp;rsquo;s because Technique X is better than Technique Y and is helping productivity, or whether it&amp;rsquo;s because Group A started out being <em>way </em>more productive than Group B and Technique X actually hurt Group A&amp;rsquo;s productivity. </p>
<p>Since Sheil&amp;rsquo;s paper in 1981, this methodological limitation has continued to show up in productivity claims about new software development practices. For example, in the early 2000s the &amp;ldquo;poster child&amp;rdquo; project for Extreme Programming was the Chrysler C3 project. Numerous claims were made for XP&amp;rsquo;s effectiveness based on the productivity of that project. I personally never accepted the claims for the effectiveness of the XP methodology based on the C3 project because that project included rock star programmers Kent Beck, Martin Fowler, and Ron Jeffries, all working on the same project. The productivity of any project those guys work on would be at the top end of the bar shown on the left of Figure 1. Those guys could do a project using batch mode processing and punch cards and still be more productive than 95% of the teams out there.  Any methodological variations of 1x or 2x due to XP (or &amp;ndash;1x or &amp;ndash;2x) would be drowned out by the variation arising from C3&amp;rsquo;s exceptional personnel. In other words, considering the exceptional talent on the C3 project, it was impossible to tell whether the C3 project&amp;rsquo;s results were <em>because of</em> XP&amp;rsquo;s practices or <em>in spite of </em>XP&amp;rsquo;s practices. </p>
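<p>The drowning-out effect is easy to simulate. The following rough Python sketch uses assumed numbers (a uniform 1-10 productivity range, 5 programmers per group, a genuine 25% boost from Technique X; none of this comes from the cited studies) to show how often a small two-group study ranks the genuinely better technique correctly:</p>

```python
import random

random.seed(1)

# Assumed for illustration: individual productivity spans 1-10 "units"
# (a 10x range), and Technique X genuinely boosts output by 25%.
def group_mean_output(n_programmers, boost=1.0):
    return sum(random.uniform(1, 10) * boost
               for _ in range(n_programmers)) / n_programmers

# Run many hypothetical studies with 5 programmers per group and count
# how often the genuinely better technique actually looks better.
trials = 10_000
x_looks_better = sum(
    group_mean_output(5, boost=1.25) > group_mean_output(5)
    for _ in range(trials)
)
print(f"Technique X wins in {x_looks_better / trials:.0%} of toy studies")
```

<p>In runs of this toy model, something like one study in four ranks the 25%-better technique as <em>worse</em>, purely because of who happened to be assigned to each group, which is exactly the confound Sheil identified.</p>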
<h3>My Decision About How to Write <em>Code Complete</em></h3>
<p>Bringing this all back to <em>Code Complete 1</em>, I hit a point early in the writing of <em>Code Complete 1</em> where I was aware of Sheil&amp;rsquo;s research, aware of the limitations of many of the studies I was using, and trying to decide what kind of book I wanted to write. </p>
<p>The first argument I had with myself was how much weight to put on all the studies I had read. I read about 600 books and articles as background for <em>Code Complete</em>. Was I going to discard them altogether? I decided, No. The studies might not be <em>conclusive</em>, but many of them were surely <em>suggestive</em>. The book was being written by me and ultimately reflected my judgment, so whether the studies were conclusive or suggestive, my role as author was the same &amp;ndash; separate the wheat from the chaff and present my personal conclusions. (There was quite a lot of chaff. Of the 600 books and articles I read, only about half made it into the bibliography. <em>Code Complete&amp;rsquo;s </em>bibliography includes only those 300 books and articles that were cited somewhere in the book.)</p>
<p>The second argument I had with myself was how much detail to provide about the studies I cited. The academic side of me argued that every time I cited a study I should explain the limitations of the study. The pragmatic side of me argued that <em>Code Complete</em> wasn&amp;rsquo;t supposed to be an academic book; it was supposed to be a practical book. If I went into detail about every study I cited, the book would be 3x as long without adding any practical value for its readers. </p>
<p>In the end I felt that detailed citations and elaborate explanations of each study would detract from the main focus of the book. So I settled on a citation style in which I cited (Author, Year), keyed to fuller bibliographic citations in the bibliography. I figured readers who wanted more academic detail could follow up on the citations themselves.</p>
<h3>A Deeper Dive Into the Research Supporting &amp;ldquo;10x&amp;rdquo;</h3>
<p>After settling on that approach with <em>Code Complete 1 </em>(back in 1991) I&amp;rsquo;ve continued to use that approach in most of the rest of my writing, including in the chapter I contributed to <em>Making Software</em>. </p>
<p>One limitation of my approach has been that, with my terse citation style, someone who is motivated enough to follow up on the citations might not be able to find the part of the book or article that I was citing, or might not understand the specific way in which the material I cited supports the point I&amp;rsquo;m making. That appears to have been the case with Laurent Bossavit&amp;rsquo;s critique of my &amp;ldquo;10x&amp;rdquo; explanation. </p>
<p>Bossavit goes point by point through my citations and reports that he was not able to find support for the claim of 10x differences in productivity. Let&amp;rsquo;s follow the same path and fill in the blanks.</p>
<p><strong>Sackman, Erickson, and Grant, 1968. </strong>Here is my summary of the first research to find 10x differences in programmer productivity:</p>
<blockquote><p>Detailed examination of Sackman, Erickson, and Grant&amp;rsquo;s findings shows some flaws in their methodology (including combining results from programmers working in low level programming languages with those working in high level programming languages). However, even after accounting for the flaws, their data still shows more than a 10-fold difference between the best programmers and the worst. </p>
<p>In years since the original study, the general finding that "There are order-of-magnitude differences among programmers" has been confirmed by many other studies of professional programmers (Curtis 1981, Mills 1983, DeMarco and Lister 1985, Curtis et al. 1986, Card 1987, Boehm and Papaccio 1988, Valett and McGarry 1989, Boehm et al 2000).</p>
</blockquote>
<p>The research on variations among individual programmers began with Sackman, Erickson, and Grant&amp;rsquo;s study published in 1968. Bossavit states that the 1968 study focused only on debugging, but that is not correct. As I stated in my blog article, the ratio of initial coding time between the best and worst programmers was about 20:1. The difference in program sizes was about 5:1. The difference in debugging was the most dramatic difference, at about 25:1, but it was not the only area in which differences were found. Differences found in coding time, debugging time, and program size all support a general claim of &amp;ldquo;order of magnitude&amp;rdquo; differences in productivity, i.e., a 10x difference. </p>
<p>An interesting historical footnote is that Sackman, Erickson, and Grant did not set out to show a 10x or 25x difference in productivity among programmers. The purpose of their research was to determine whether programming online offered any real productivity advantage compared to programming offline. What they discovered, to their surprise, was that, as in Figure 1, any difference in online vs. offline productivity was drowned out by the productivity differences among individuals. The factor they set out to study would be irrelevant today. The conclusion they stumbled onto by accident is one that we&amp;rsquo;re still talking about.</p>
<p><strong>Curtis 1981. </strong>Bossavit criticizes my (Curtis 1981) citation by stating </p>
<blockquote><p>The 1981 Curtis study included 60 programmers, which once again were dealing with a debugging rather than a programming task.</p>
</blockquote>
<p>I do not know why he thinks this statement is a criticism of the Curtis study. In my corner of the world debugging is not the only programming task, but it certainly is an essential programming task, and everyone knows that. The Curtis article concludes that, &amp;ldquo;a statement such as &amp;lsquo;order of magnitude differences in the performance of individual programmers&amp;rsquo; seems justified.&amp;rdquo; The (Curtis 1981) citation directly supports the 10x claim&amp;mdash;almost word for word. </p>
<p><strong>Curtis 1986. </strong>Moving to the next citation, Bossavit states that, &amp;ldquo;the 1986 Curtis article does not report on an empirical study.&amp;rdquo; I never stated that Curtis 1986 was an &amp;ldquo;empirical study.&amp;rdquo; Curtis 1986 is a broad paper that touches on, among other things, differences in programmer productivity. Bossavit says the paper &amp;ldquo;offers no support for the &amp;lsquo;10x&amp;rsquo; claim.&amp;rdquo; But the first paragraph in section II.A. of the paper (p. 1093) summarizes 4 studies with the overall gist of the studies being that there are very large differences in productivity among programmers. The specific numbers cited are 28:1 and 23:1 differences. Clearly that again offers direct support for the 10x claim. </p>
<p><strong>Mills 1983. </strong>The &amp;ldquo;Mills 1983&amp;rdquo; citation is to a book by Harlan Mills titled <em>Software Productivity </em>in which Mills cites 10:1 differences in productivity not just among individuals but also among teams. As Bossavit points out, the Mills book contains &amp;ldquo;experience reports,&amp;rdquo; among other things. Apparently Bossavit doesn&amp;rsquo;t consider an &amp;ldquo;experience report&amp;rdquo; to be a &amp;ldquo;study,&amp;rdquo; but I do, which is why I cited Mills&amp;rsquo; 1983 book. </p>
<p><strong>DeMarco and Lister 1985. </strong>Bossavit misreads my citation of DeMarco and Lister 1985, assuming it refers to their classic book <em>Peopleware</em>. That is a natural assumption, but as I stated clearly in the article&amp;rsquo;s bibliography, the reference was to their paper titled "Programmer Performance and the Effects of the Workplace," which was published a couple of years before <em>Peopleware</em>.</p>
<p>Bossavit&amp;rsquo;s objection to this study is </p>
<blockquote><p>The only &amp;ldquo;studies&amp;rdquo; reported on therein are the programming contests organized by the authors, which took place under loosely controlled conditions (participants were to tackle the exercises at their workplace and concurrently with their work as professional programmers), making the results hardly dependable.</p>
</blockquote>
<p>Editorial insinuations aside, that is a correct description of what DeMarco and Lister reported, both in the paper I cited and in <em>Peopleware</em>. Their 1985 study had some of the methodological limitations Sheil discussed in 1981. Having said that, their study supports the 10x claim in spades and is <em>not</em> subject to many of the more common methodological weaknesses present in other software engineering studies. DeMarco and Lister reported results from 166 programmers, which is a much larger group than used in most studies. The programmers were working professionals rather than students, which is not always the case. The focus of the study was a complete programming assignment&amp;mdash;design, code, desk check, and for part of the group, test and debug.</p>
<p>The programmers in DeMarco and Lister&amp;rsquo;s study were trying to complete an assignment in their normal workplace. Bossavit seems to think that undermines the credibility of their research. I think it <em>enhances </em>the credibility of their research. Which do you trust more: results from a study in which programmers worked in a carefully controlled university environment, or results from a study in which programmers were subjected to all the day-to-day interruptions and distractions that programmers are subjected to in real life? Personally I put more weight on the study that more closely models real-world conditions, which is why I cited it. </p>
<p>As far as the 10x claim goes, Bossavit should have looked at the paper I cited, not the book. The paper shows a 5.6x difference between the best and worst programmers&amp;mdash;<em>among the programmers who finished the assignment</em>. About 10% of the programmers weren&amp;rsquo;t able to complete the assignment <em>at all</em>. That makes the difference between best and worst programmers essentially infinite &amp;ndash; and certainly supports the round-number claim of 10x differences from the best programmers to the worst. </p>
<p><strong>Card 1987. </strong>Bossavit says, </p>
<blockquote><p>The 1987 Card reference isn&amp;rsquo;t an academic publication but an executive report by a private research institution, wherein a few tables of figures appear, none of which seem to directly bear on the &amp;ldquo;10x&amp;rdquo; claim.</p>
</blockquote>
<p>The publication is an article in <em>Information and Software Technology</em>, which is &amp;ldquo;the international archival journal focusing on research and experience that contributes to the improvement of software development practices.&amp;rdquo; There is no basis for Bossavit to characterize Card&amp;rsquo;s journal article as an &amp;ldquo;executive report.&amp;rdquo; </p>
<p>Bossavit claims that none of the tables of figures &amp;ldquo;seem to directly bear on the &amp;lsquo;10x&amp;rsquo; claim.&amp;rdquo; But on p. 293 of the article, Figure 3, titled &amp;ldquo;Programmer productivity variations,&amp;rdquo; shows two graphs: a &amp;ldquo;large project&amp;rdquo; graph in which productivity ranges from 0.9 to 7.9 (a difference of 8.8x) and a &amp;ldquo;small project&amp;rdquo; graph with a productivity range of 0.5 to 10.8 (a difference of 21.6x). These &amp;ldquo;programmer productivity variation&amp;rdquo; graphs support the 10x claim quite directly.</p>
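<p>The arithmetic behind those two figures is simple enough to recompute directly from the ranges reported in Card&amp;rsquo;s Figure 3:</p>

```python
# Productivity ranges read from Card 1987, Figure 3 ("Programmer
# productivity variations," p. 293): (lowest, highest) individual
# productivity on each kind of project.
ranges = {"large project": (0.9, 7.9), "small project": (0.5, 10.8)}

for name, (low, high) in ranges.items():
    # Best-to-worst ratio: 7.9/0.9 gives 8.8x, 10.8/0.5 gives 21.6x.
    print(f"{name}: {high / low:.1f}x variation")
```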
<p><strong>Boehm and Papaccio 1988.</strong> I will acknowledge that this wasn&amp;rsquo;t the clearest citation for the underlying research I meant to refer to. I probably should have cited Boehm 1981 instead. In 1981, Barry Boehm published <em>Software Engineering Economics, </em>the first comprehensive description of the Cocomo estimation model. The adjustment factors for the model were derived through analysis of historical data. The model shows differences in team productivity based on programmer capability of 4.18 to 1. This is not quite an order of magnitude, but it is for teams, rather than for individuals, and generally supports the claim that &amp;ldquo;there are very large differences in capabilities between different individuals and teams.&amp;rdquo; </p>
<p><strong>Boehm 2000. </strong>Bossavit states that he did not look at this source. Boehm 2000 is <em>Software Cost Estimation with Cocomo II</em>, the update of the Cocomo model that was originally described in Boehm 1981. In the 2000 update, the factors in the Cocomo model were calibrated using data from a database of about 100 projects. Cocomo II analyzes the effects of a number of personnel factors. According to Cocomo II, if you compare a team made up of top-tier programmers, experienced with the application, programming language, and platform they&amp;rsquo;re using, to a team made up of bottom-tier programmers, inexperienced with the application, programming language, and platform they&amp;rsquo;re using, you can expect a difference of 5.3x in productivity.</p>
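<p>For readers who want to see where a figure like 5.3x comes from: Cocomo II&amp;rsquo;s personnel-related effort multipliers combine multiplicatively. The sketch below uses the post-architecture multiplier values as I recall them from the published model; treat the specific constants as illustrative and verify them against Boehm et al. 2000 before relying on them:</p>

```python
# Worst-case vs. best-case effort multipliers for the personnel
# factors named in the text (programmer capability plus application,
# platform, and language/tool experience). Values quoted from memory
# from Cocomo II.2000's post-architecture model -- treat as illustrative.
factors = {
    "PCAP (programmer capability)":      (1.34, 0.76),
    "APEX (applications experience)":    (1.22, 0.81),
    "PLEX (platform experience)":        (1.19, 0.85),
    "LTEX (language/tool experience)":   (1.20, 0.84),
}

ratio = 1.0
for name, (very_low, very_high) in factors.items():
    # Each pair is (multiplier at the worst rating, at the best rating);
    # their quotient is that factor's contribution to the spread.
    ratio *= very_low / very_high

# The four quotients multiply out to roughly 5.3.
print(f"Bottom-tier vs. top-tier team: {ratio:.1f}x")
```

It is also worth noticing that a roughly 5x spread at the team level is exactly what you would expect if 10x individual differences partially average out across a team.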
<p>The same conclusion applies here that applies to Boehm 1981: This is not quite an order of magnitude difference, but since it applies to teams rather than individuals, it generally supports the claim that &amp;ldquo;there are very large differences in capabilities between different individuals and teams.&amp;rdquo; It is also significant that, according to Cocomo II, the factors related to the personnel composing the team affect productivity more than any other factors. </p>
<p><strong>Valett and McGarry 1989. </strong>Valett and McGarry provide additional detail from the same data set used by Card 1987 and also cite individual differences ranging from 8.8x to 21.6x. Valett and McGarry&amp;rsquo;s conclusion is based on data from more than 150 individuals across 25 major projects and includes coding as well as debugging. Bossavit claims this study amounts to a &amp;ldquo;citation of a citation,&amp;rdquo; but I don&amp;rsquo;t know why he claims that. Valett and McGarry were both at the organization described in the study and directly involved in it. And the differences cited certainly support my general claim of 10x differences in productivity among programmers.</p>
<h3>Reaffirming: Strong Research Support for the 10x Conclusion</h3>
<p>To summarize, the claim that Bossavit doesn&amp;rsquo;t like is this:</p>
<blockquote><p>The general finding that "There are order-of-magnitude differences among programmers" has been confirmed by many other studies of professional programmers (Curtis 1981, Mills 1983, DeMarco and Lister 1985, Curtis et al. 1986, Card 1987, Boehm and Papaccio 1988, Valett and McGarry 1989, Boehm et al 2000).</p>
</blockquote>
<p>As I reviewed these citations once again in writing this article, I concluded again that they support the general finding that there are 10x productivity differences among programmers. The studies have collectively involved hundreds of professional programmers across a spectrum of programming activities. Specific differences range from about 5:1 to about 25:1, and in my judgment that collectively supports the 10x claim. Moreover, the research finding is consistent with my experience, in which I have personally observed 10x differences (or more) between different programmers. I think one reason the 10x claim resonates with many people is that many other software professionals have observed 10x differences among programmers too. </p>
<p>Bossavit concludes his review of my blog entry / book chapter by saying this:</p>
<blockquote><p>What is happening here is not pretty. I&amp;rsquo;m not accusing McConnell here of being a bad person. I <strong>am</strong> claiming that for whatever reasons he is here dressing up, in the trappings of scientific discourse, what is in fact an unsupported assertion meshing well with his favored opinion. McConnell is abusing the mechanism of scientific citation to lend authority to a claim which derives it only from a couple studies which can be at best described as &amp;ldquo;exploratory&amp;rdquo; (and at worst, maybe, as &amp;ldquo;discredited&amp;rdquo;).</p>
</blockquote>
<p>Obviously I disagree with Bossavit&amp;rsquo;s conclusion. Saying he thinks there are methodological weaknesses in the studies I cited would be one kind of criticism that might contain a grain of truth. None of the studies are perfect, and we could have a constructive dialog about that. But that isn&amp;rsquo;t what he says. He says I am making &amp;ldquo;unsupported assertions&amp;rdquo; and &amp;ldquo;cheating with citations.&amp;rdquo; Those claims are extreme and unfounded. Bossavit seems to be aspiring to some academic ideal in which the only studies that can be cited are those that are methodologically pure in every respect. That&amp;rsquo;s a laudable ideal, but it would have the practical effect of restricting the universe of allowable software engineering studies to zero. </p>
<p>Having said that, the body of research that supports the 10x claim is as solid as any research that&amp;rsquo;s been done in software engineering. Studies that support the 10x claim are singularly <em>not </em>subject to the methodological limitation described in Figure 1, because they are studying individual variability itself (i.e., only the left side of the figure). Bossavit does not cite even one study&amp;mdash;flawed or otherwise&amp;mdash;that counters the 10x claim, and I haven&amp;rsquo;t seen any such studies either. The fact that no studies have produced findings that contradict the 10x claim provides even more confidence in the 10x claim. When I consider the number of studies that have been done, in aggregate I find the research to be not only suggestive, but conclusive&amp;mdash;which is rare in software engineering research.  </p>
<p>As for my writing style, even if people misunderstand what I&amp;rsquo;ve written from time to time, I plan to stand by my practical-focus-with-minimal-citations approach. I think most readers prefer the one-paragraph summary with citations that I repeated at the top of this section to the two dozen paragraphs that academically dissect it. It&amp;rsquo;s interesting to go into that level of detail once in a while, but not very often.</p>
<h3>References</h3>
<p>Boehm, Barry W., and Philip N. Papaccio. 1988. "Understanding and Controlling Software Costs." <em>IEEE Transactions on Software Engineering</em> SE-14, no. 10 (October): 1462-77.</p>
<p>Boehm, Barry. 1981. <em>Software Engineering Economics</em>. Boston, Mass.: Addison-Wesley.</p>
<p>Boehm, Barry, et al. 2000. <em>Software Cost Estimation with Cocomo II</em>. Boston, Mass.: Addison-Wesley.</p>
<p>Boehm, Barry W., T. E. Gray, and T. Seewaldt. 1984. "Prototyping Versus Specifying: A Multiproject Experiment." <em>IEEE Transactions on Software Engineering</em> SE-10, no. 3 (May): 290-303.</p>
<p>Card, David N. 1987. "A Software Technology Evaluation Program." <em>Information and Software Technology</em> 29, no. 6 (July/August): 291-300.</p>
<p>Curtis, Bill. 1981. "Substantiating Programmer Variability." <em>Proceedings of the IEEE</em> 69, no. 7: 846.</p>
<p>Curtis, Bill, et al. 1986. "Software Psychology: The Need for an Interdisciplinary Program." <em>Proceedings of the IEEE</em> 74, no. 8: 1092-1106.</p>
<p>DeMarco, Tom, and Timothy Lister. 1985. "Programmer Performance and the Effects of the Workplace." <em>Proceedings of the 8th International Conference on Software Engineering</em>. Washington, D.C.: IEEE Computer Society Press, 268-72.</p>
<p>DeMarco, Tom, and Timothy Lister. 1999. <em>Peopleware: Productive Projects and Teams</em>, 2d Ed. New York: Dorset House.</p>
<p>Mills, Harlan D. 1983. <em>Software Productivity</em>. Boston, Mass.: Little, Brown.</p>
<p>Sackman, H., W.J. Erikson, and E. E. Grant. 1968. "Exploratory Experimental Studies Comparing Online and Offline Programming Performance." <em>Communications of the ACM</em> 11, no. 1 (January): 3-11.</p>
<p>Sheil, B. A. 1981. "The Psychological Study of Programming." <em>Computing Surveys</em> 13, no. 1 (March).</p>
<p>Valett, J., and F. E. McGarry. 1989. "A Summary of Software Measurement Experiences in the Software Engineering Laboratory." <em>Journal of Systems and Software</em> 9, no. 2 (February): 137-48.</p>]]></description>
  <dc:creator>stevemcc</dc:creator>
  <dc:date>2011-01-09T18:15:00Z</dc:date>
  <content:encoded><![CDATA[<p>I recently contributed a chapter to <em>Making Software </em>(Oram and Wilson, eds., O'Reilly, 2011). The purpose of this edited collection of essays is to pull together research-based writing on software engineering. In essence, the purpose is to say, "What do we really know (<em>quantitatively based</em>), and what do we only kind of think we know (<em>subjectively based</em>)?" My chapter, "What Does 10X Mean" is an edited version of my 2008 blog entry "<a href="https://www.construx.com/10x_Software_Development/Productivity_Variations_Among_Software_Developers_and_Teams__The_Origin_of_10x/">Productivity Variations Among Developers and Teams: The Origin of 10x</a>." The chapter focuses on the research that supports the claim of 10-fold differences in productivity among programmers.</p>
<p>Laurent Bossavit published a critique of my blog entry on his company's <a href="http://blog.institut-agile.fr/2010/11/folklore-ou-fait-scientifique-comment.html">website in French</a>. That critique was translated into English on a <a href="http://morendil.github.com/folklore.html">different website</a>.</p>
<p>The critique (or its English translation, anyway) is quite critical of the claim that programmer productivity varies by 10x, quite critical of the research foundation for that claim, and quite critical of me personally. The specific nature of the criticism gives me an opportunity to talk about the state of research in software development, my approach to writing about software development, and to revisit the 10x issue, which is one of my favorite topics.</p>
<p><strong>The State of Software Engineering Research</strong></p>
<p>Bossavit's criticism of my writing is notable for the fact that it cites my work, comments on some of the citations that my work cites, but doesn't cite any other software-specific research of its own.</p>
<p>In marked contrast, while I was working on the early stages of <em>Code Complete, 1st Ed.</em>, I read a paper by B. A. Sheil titled "The Psychological Study of Programming" (<em>Computing Surveys</em>, Vol. 13. No. 1, March 1981). Sheil reviewed dozens of papers on programming issues with a specific eye toward the research methodologies used. The conclusion of Sheil's paper was sobering. The programming studies he reviewed failed to control for variables carefully enough to meet research standards that would be needed for publication in other more established fields like psychology. The papers didn't achieve levels of statistical significance good enough for publication in other fields either. In other words, the research foundation for software engineering (circa 1981) was poor.</p>
<p>One of the biggest issues identified was that studies didn't control for differences in individual capabilities. Suppose you have a new methodology you believe increases productivity and quality by 50%. If there are potential differences as large as 10x between individuals, the differences arising from individuals in any given study will drown out any differences you might want to attribute to a change in methodology. See Figure 1.</p>
<p><img width="401" height="388" title="ProductivityVariation" alt="ProductivityVariation" src="https://www.construx.com/uploadedimages/ProductivityVariation_thumb_1EC70F30.jpg" /></p>
<p><strong>Figure 1</strong></p>
<p>This is a very big deal because almost none of the research at the time I was working on <em>Code Complete</em> <em>1</em> controlled for this variable. For example, a study would have Programmer Group A read a sample of code formatted using Technique X and Programmer Group B read a sample of code formatted using Technique Y. If Group A was found to be 25% more productive than Group B, you don't really know whether it's because Technique X is better than Technique Y and is helping productivity, or whether it's because Group A started out being <em>way </em>more productive than Group B and Technique X actually hurt Group A's productivity.</p>
<p>Since Sheil's paper in 1981, this methodological limitation has continued to show up in productivity claims about new software development practices. For example, in the early 2000s the "poster child" project for Extreme Programming was the Chrysler C3 project. Numerous claims were made for XP's effectiveness based on the productivity of that project. I personally never accepted the claims for the effectiveness of the XP methodology based on the C3 project because that project included rock star programmers Kent Beck, Martin Fowler, and Ron Jeffries, all working on the same project. The productivity of any project those guys work on would be at the top end of the bar shown on the left of Figure 1. Those guys could do a project using batch mode processing and punch cards and still be more productive than 95% of the teams out there. Any methodological variations of 1x or 2x due to XP (or -1x or -2x) would be drowned out by the variation arising from C3's exceptional personnel. In other words, considering the exceptional talent on the C3 project, it was impossible to tell whether the C3 project's results were <em>because of</em> XP's practices or <em>in spite of </em>XP's practices.</p>
<p><strong>My Decision About How to Write <em>Code Complete</em></strong></p>
<p>Bringing this all back to <em>Code Complete 1</em>, I hit a point early in the writing of <em>Code Complete 1</em> where I was aware of Sheil's research, aware of the limitations of many of the studies I was using, and trying to decide what kind of book I wanted to write.</p>
<p>The first argument I had with myself was how much weight to put on all the studies I had read. I read about 600 books and articles as background for <em>Code Complete</em>. Was I going to discard them altogether? I decided, No. The studies might not be <em>conclusive</em>, but many of them were surely <em>suggestive</em>. The book was being written by me and ultimately reflected my judgment, so whether the studies were conclusive or suggestive, my role as author was the same--separate the wheat from the chaff and present my personal conclusions. (There was quite a lot of chaff. Of the 600 books and articles I read, only about half made it into the bibliography. <em>Code Complete's </em>bibliography includes only those 300 books and articles that were cited somewhere in the book.)</p>
<p>The second argument I had with myself was how much detail to provide about the studies I cited. The academic side of me argued that every time I cited a study I should explain the limitations of the study. The pragmatic side of me argued that <em>Code Complete</em> wasn't supposed to be an academic book; it was supposed to be a practical book. If I went into detail about every study I cited, the book would be 3x as long without adding any practical value for its readers.</p>
<p>In the end I felt that detailed citations and elaborate explanations of each study would detract from the main focus of the book. So I settled on a citation style in which I cited (Author, Year) keyed to fuller bibliographic citations in the bibliography. I figured readers who wanted more academic detail could follow up on the citations themselves.</p>
<p><strong>A Deeper Dive Into the Research Supporting "10x"</strong></p>
<p>After settling on that approach with <em>Code Complete 1 </em>(back in 1991) I've continued to use that approach in most of the rest of my writing, including in the chapter I contributed to <em>Making Software</em>.</p>
<p>One limitation of my approach has been that, with my terse citation style, someone who is motivated enough to follow up on the citations might not be able to find the part of the book or article that I was citing, or might not understand the specific way in which the material I cited supports the point I'm making. That appears to have been the case with Laurent Bossavit's critique of my "10x" explanation.</p>
<p>Bossavit goes point by point through my citations and was not able to find the support for the claim of 10x differences in productivity. Let's follow the same path and fill in the blanks.</p>
<p><strong>Sackman, Erickson, and Grant, 1968. </strong>Here is my summary of the first research to find 10x differences in programmer productivity:</p>
<blockquote><p><em>Detailed examination of Sackman, Erickson, and Grant's findings shows some flaws in their methodology (including combining results from programmers working in low level programming languages with those working in high level programming languages). However, even after accounting for the flaws, their data still shows more than a 10-fold difference between the best programmers and the worst.</em></p>
<em></em></blockquote>
<em></em><blockquote><p><em>In years since the original study, the general finding that "There are order-of-magnitude differences among programmers" has been confirmed by many other studies of professional programmers (Curtis 1981, Mills 1983, DeMarco and Lister 1985, Curtis et al. 1986, Card 1987, Boehm and Papaccio 1988, Valett and McGarry 1989, Boehm et al 2000).</em></p>
</blockquote>
<p>The research on variations among individual programmers began with Sackman, Erickson, and Grant's study published in 1968. Bossavit states that the 1968 study focused only on debugging, but that is not correct. As I stated in my blog article, the ratio of initial coding time between the best and worst programmers was about 20:1. The difference in program sizes was about 5:1. The difference in debugging was the most dramatic difference, at about 25:1, but it was not the only area in which differences were found. Differences found in coding time, debugging time, and program size all support a general claim of "order of magnitude" differences in productivity, i.e., a 10x difference.</p>
<p>An interesting historical footnote is that Sackman, Erickson, and Grant did not set out to show a 10x or 25x difference in productivity among programmers. The purpose of their research was to determine whether programming online offered any real productivity advantage compared to programming offline. What they discovered, to their surprise, was that, ala Figure 1, any difference in online vs. offline productivity was drowned out by the productivity differences among individuals. The factor they set out to study would be irrelevant today. The conclusion they stumbled onto by accident is one that we're still talking about.</p>
<p><strong>Curtis 1981. </strong>Bossavit criticizes my (Curtis 1981) citation by stating</p>
<p>The 1981 Curtis study included 60 programmers, which once again were dealing with a debugging rather than a programming task.</p>
<p>I do not know why he thinks this statement is a criticism of the Curtis study. In my corner of the world debugging is not the only programming task, but it certainly is an essential programming task, and everyone knows that. The Curtis article concludes that, "a statement such as 'order of magnitude differences in the performance of individual programmers' seems justified." The (Curtis 1981) citation directly supports the 10x claim--almost word for word.</p>
<p><strong>Curtis 1986. </strong>Moving to the next citation, Bossavit states that, "the 1986 Curtis article does not report on an empirical study." I never stated that Curtis 1986 was an "empirical study." Curtis 1986 is a broad paper that touches on, among other things, differences in programmer productivity. Bossavit says the paper "offers no support for the '10x' claim." But the first paragraph in section II.A. of the paper (p. 1093) summarizes 4 studies with the overall gist of the studies being that there are very large differences in productivity among programmers. The specific numbers cited are 28:1 and 23:1 differences. Clearly that again offers direct support for the 10x claim.</p>
<p><strong>Mills 1983. </strong>The "Mills 1983" citation is to a book by Harlan Mills titled <em>Software Productivity </em>in which Mills cites 10:1 differences in productivity not just among individuals but also among teams. As Bossavit points out, the Mills book contains "experience reports," among other things. Apparently Bossavit doesn't consider an "experience report" to be a "study," but I do, which is why I cited Mills' 1983 book.</p>
<p><strong>DeMarco and Lister 1985. </strong>Bossavit misreads my citation of DeMarco and Lister 1985, assuming it refers to their classic book <em>Peopleware</em>. That is a natural assumption, but as I stated clearly in the article's bibliography, the reference was to their paper titled "Programmer Performance and the Effects of the Workplace," which was published a couple of years before <em>Peopleware</em>.</p>
<p>Bossavit's objection to this study is</p>
<blockquote><p><em>The only "studies" reported on therein are the programming contests organized by the authors, which took place under loosely controlled conditions (participants were to tackle the exercises at their workplace and concurrently with their work as professional programmers), making the results hardly dependable.</em></p>
</blockquote>
<p>Editorial insinuations aside, that is a correct description of what DeMarco and Lister reported, both in the paper I cited and in <em>Peopleware</em>. Their 1985 study had some of the methodological limitations Sheil discussed in 1981. Having said that, their study supports the 10x claim in spades and is <em>not</em> subject to many of the more common methodological weaknesses present in other software engineering studies. DeMarco and Lister reported results from 166 programmers, a much larger group than is used in most studies. The programmers were working professionals rather than students, which is not always the case. The focus of the study was a complete programming assignment--design, code, desk check, and for part of the group, test and debug.</p>
<p>The programmers in DeMarco and Lister's study were trying to complete an assignment in their normal workplace. Bossavit seems to think that undermines the credibility of their research. I think it <em>enhances </em>the credibility of their research. Which do you trust more: results from a study in which programmers worked in a carefully controlled university environment, or results from a study in which programmers were subjected to all the day-to-day interruptions and distractions that programmers are subjected to in real life? Personally I put more weight on the study that more closely models real-world conditions, which is why I cited it.</p>
<p>As far as the 10x claim goes, Bossavit should have looked at the paper I cited rather than the book. The paper shows a 5.6x difference between the best and worst programmers--<em>among the programmers who finished the assignment</em>. About 10% of the programmers weren't able to complete the assignment <em>at all</em>. That makes the difference between best and worst programmers essentially infinite--and certainly supports the round-number claim of 10x differences from the best programmers to the worst.</p>
<p><strong>Card 1987. </strong>Bossavit says,</p>
<blockquote><p><em>The 1987 Card reference isn't an academic publication but an executive report by a private research institution, wherein a few tables of figures appear, none of which seem to directly bear on the "10x" claim.</em></p>
</blockquote>
<p>The publication is an article in <em>Information and Software Technology</em>, which is "the international archival journal focusing on research and experience that contributes to the improvement of software development practices." There is no basis for Bossavit to characterize Card's journal article as an "executive report."</p>
<p>Bossavit claims that none of the tables of figures "seem to directly bear on the '10x' claim." But on p. 293 of the article, Figure 3, titled "Programmer productivity variations," shows two graphs: a "large project" graph in which productivity ranges from 0.9 to 7.9 (a difference of 8.8x), and a "small project" graph with a productivity range of 0.5 to 10.8 (a difference of 21.6x). These "programmer productivity variation" graphs support the 10x claim quite directly.</p>
<p><strong>Boehm and Papaccio 1988.</strong> I will acknowledge that this wasn't the clearest citation for the underlying research I meant to refer to. I probably should have cited Boehm 1981 instead. In 1981, Barry Boehm published <em>Software Engineering Economics, </em>the first comprehensive description of the Cocomo estimation model. The adjustment factors for the model were derived through analysis of historical data. The model shows differences in team productivity based on programmer capability of 4.18 to 1. This is not quite an order of magnitude, but it is for teams, rather than for individuals, and generally supports the claim that "there are very large differences in capabilities between different individuals and teams."</p>
<p><strong>Boehm 2000. </strong>Bossavit states that he did not look at this source. Boehm 2000 is <em>Software Cost Estimation with Cocomo II</em>, the update of the Cocomo model that was originally described in Boehm 1981. In the 2000 update, the factors in the Cocomo model were calibrated using data from a database of about 100 projects. Cocomo II analyzes the effects of a number of personnel factors. According to Cocomo II, if you compare a team made up of top-tier programmers, experienced with the application, programming language, and platform they're using, to a team made up of bottom-tier programmers, inexperienced with the application, programming language, and platform they're using, you can expect a difference of 5.3x in productivity.</p>
<p>The same conclusion applies here that applies to Boehm 1981: This is not quite an order of magnitude difference, but since it applies to teams rather than individuals, it generally supports the claim that "there are very large differences in capabilities between different individuals and teams." It is also significant that, according to Cocomo II, the factors related to the personnel composing the team affect productivity more than any other factors.</p>
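<p>To make the arithmetic behind these team-level ranges concrete, here is a minimal sketch of how Cocomo-style effort multipliers combine. The multiplier values below are hypothetical placeholders chosen only for illustration--they are not the calibrated Cocomo II table, which is what yields the 5.3x figure cited above. The point is simply that per-factor differences compound multiplicatively:</p>

```python
# Sketch of how Cocomo-style effort multipliers combine (illustrative only).
# The multiplier values are HYPOTHETICAL placeholders, not the calibrated
# Cocomo II table.
from math import prod

def effort(size_ksloc, multipliers, a=2.94, e=1.10):
    """Cocomo II-shaped effort equation: A * Size^E * product(effort multipliers)."""
    return a * size_ksloc ** e * prod(multipliers)

# Hypothetical per-factor multipliers (capability, application experience,
# platform experience, language/tool experience) for two teams.
best_team  = [0.80, 0.85, 0.90, 0.90]   # each factor reduces effort
worst_team = [1.30, 1.20, 1.15, 1.15]   # each factor increases effort

ratio = effort(100, worst_team) / effort(100, best_team)
print(f"worst-to-best effort ratio: {ratio:.1f}x")
```

<p>With these placeholder values the worst-to-best ratio works out to about 3.7x. Note that the size term cancels, so the ratio depends only on the multipliers--which is why personnel factors can swamp every other adjustment in the model.</p>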
<p><strong>Valett and McGarry 1989. </strong>Valett and McGarry provide additional detail from the same data set used by Card 1987 and also cite individual differences ranging from 8.8x to 21.6x. Valett and McGarry's conclusion is based on data from more than 150 individuals across 25 major projects and includes coding as well as debugging. Bossavit claims this study amounts to a "citation of a citation," but I don't know why he claims that. Valett and McGarry were both at the organization described in the study and directly involved in it. And the differences cited certainly support my general claim of 10x differences in productivity among programmers.</p>
<p><strong>Reaffirming: Strong Research Support for the 10x Conclusion</strong></p>
<p>To summarize, the claim that Bossavit doesn't like is this:</p>
<blockquote><p><em>The general finding that "There are order-of-magnitude differences among programmers" has been confirmed by many other studies of professional programmers (Curtis 1981, Mills 1983, DeMarco and Lister 1985, Curtis et al. 1986, Card 1987, Boehm and Papaccio 1988, Valett and McGarry 1989, Boehm et al 2000).</em></p>
</blockquote>
<p>As I reviewed these citations once again in writing this article, I concluded again that they support the general finding that there are 10x productivity differences among programmers. The studies have collectively involved hundreds of professional programmers across a spectrum of programming activities. Specific differences range from about 5:1 to about 25:1, and in my judgment that collectively supports the 10x claim. Moreover, the research finding is consistent with my experience, in which I have personally observed 10x differences (or more) between different programmers. I think one reason the 10x claim resonates with many people is that many other software professionals have observed 10x differences among programmers too.</p>
<p>Bossavit concludes his review of my blog entry / book chapter by saying this:</p>
<blockquote><p><em>What is happening here is not pretty. I'm not accusing McConnell here of being a bad person. I <strong>am</strong> claiming that for whatever reasons he is here dressing up, in the trappings of scientific discourse, what is in fact an unsupported assertion meshing well with his favored opinion. McConnell is abusing the mechanism of scientific citation to lend authority to a claim which derives it only from a couple studies which can be at best described as "exploratory" (and at worst, maybe, as "discredited").</em></p>
</blockquote>
<p>Obviously I disagree with Bossavit's conclusion. Saying he thinks there are methodological weaknesses in the studies I cited would be one kind of criticism that might contain a grain of truth. None of the studies are perfect, and we could have a constructive dialog about that. But that isn't what he says. He says I am making "unsupported assertions" and "cheating with citations." Those claims are unfounded. Bossavit seems to be aspiring to some academic ideal in which the only studies that can be cited are those that are methodologically pure in every respect. That's a laudable ideal, but it would have the practical effect of restricting the universe of allowable software engineering studies to zero.</p>
<p>Having said that, the body of research that supports the 10x claim is as solid as any research that's been done in software engineering. Studies that support the 10x claim are singularly <em>not </em>subject to the methodological limitation described in Figure 1, because they are studying individual variability itself (i.e., only the left side of the figure). Bossavit does not cite even one study--flawed or otherwise--that counters the 10x claim, and I haven't seen any such studies either. The fact that no studies have produced findings that contradict the 10x claim provides even more confidence in the 10x claim. When I consider the number of studies that have been conducted, in aggregate I find the research to be not only suggestive, but conclusive--which is rare in software engineering research.</p>
<p>As for my writing style, even if people misunderstand what I've written from time to time, I plan to stand by my practical-focus-with-minimal-citations approach. I think most readers prefer the one-paragraph summary with citations that I repeated at the top of this section to the two dozen paragraphs that academically dissect it. It's interesting to go into that level of detail once in a while, but not very often.</p>
<p><strong>References</strong></p>
<p>Boehm, Barry W., and Philip N. Papaccio. 1988. "Understanding and Controlling Software Costs." <em>IEEE Transactions on Software Engineering</em> SE-14, no. 10 (October): 1462-77.</p>
<p>Boehm, Barry. 1981. <em>Software Engineering Economics</em>. Boston, Mass.: Addison-Wesley.</p>
<p>Boehm, Barry, et al. 2000. <em>Software Cost Estimation with Cocomo II</em>. Boston, Mass.: Addison-Wesley.</p>
<p>Boehm, Barry W., T. E. Gray, and T. Seewaldt. 1984. "Prototyping Versus Specifying: A Multiproject Experiment." <em>IEEE Transactions on Software Engineering</em> SE-10, no. 3 (May): 290-303. Also in Jones 1986b.</p>
<p>Card, David N. 1987. "A Software Technology Evaluation Program." <em>Information and Software Technology</em> 29, no. 6 (July/August): 291-300.</p>
<p>Curtis, Bill. 1981. "Substantiating Programmer Variability." <em>Proceedings of the IEEE</em> 69, no. 7: 846.</p>
<p>Curtis, Bill, et al. 1986. "Software Psychology: The Need for an Interdisciplinary Program." <em>Proceedings of the IEEE</em> 74, no. 8: 1092-1106.</p>
<p>DeMarco, Tom, and Timothy Lister. 1985. "Programmer Performance and the Effects of the Workplace." <em>Proceedings of the 8th International Conference on Software Engineering</em>. Washington, D.C.: IEEE Computer Society Press, 268-72.</p>
<p>DeMarco, Tom, and Timothy Lister. 1999. <em>Peopleware: Productive Projects and Teams</em>, 2d ed. New York: Dorset House.</p>
<p>Mills, Harlan D. 1983. <em>Software Productivity</em>. Boston, Mass.: Little, Brown.</p>
<p>Sackman, H., W.J. Erikson, and E. E. Grant. 1968. "Exploratory Experimental Studies Comparing Online and Offline Programming Performance." <em>Communications of the ACM</em> 11, no. 1 (January): 3-11.</p>
<p>Sheil, B. A. 1981. "The Psychological Study of Programming." <em>Computing Surveys</em> 13, no. 1 (March).</p>
<p>Valett, J., and F. E. McGarry. 1989. "A Summary of Software Measurement Experiences in the Software Engineering Laboratory." <em>Journal of Systems and Software</em> 9, no. 2 (February): 137-48.</p>]]></content:encoded>
 </item>
 <item rdf:about="/10x_Software_Development/Construx_Job_Opening__Software_Development_Trainer-Consultant/?blogid=23485">
  <title>Construx Job Opening: Software Development Trainer-Consultant</title>
  <link>https://www.construx.com/10x_Software_Development/Construx_Job_Opening__Software_Development_Trainer-Consultant/?blogid=23485</link>
<description><![CDATA[<p>Construx is looking for a trainer/consultant. Construx has a fantastic staff and unmatched benefits. For the well qualified person who wants to do excellent work in a highly stimulating environment, it is a dream job -- which is why we've been recognized as the <b>Best Small Company to Work for in Washington State </b>twice. </p>
<p>Here"s the official job posting:</p>
<hr />
<p>Construx is seeking an experienced software engineer to provide training and consulting services with an initial emphasis on training ("Technical Service Provider (TSP)"). Deep software development experience is required, as is broad and deep knowledge of software development literature. Strong preference will be given to candidates with extensive training and/or public speaking experience. Candidates must have "leadership" level capability in at least three of the following knowledge areas:</p>
<ul>
<li>Software Project Management </li>
<li>Software Requirements </li>
<li>Software Design </li>
<li>Software Construction </li>
<li>Software Test </li>
<li>Software Quality Assurance </li>
<li>Software Maintenance </li>
<li>Software Tools and Methods </li>
<li>Software Configuration Management</li>
</ul>
<h3>Detailed Duties</h3>
<p><b>Seminar Delivery. </b>Deliver software engineering seminars to working software professionals at our Bellevue, Washington training facility and at client locations in North America and worldwide. For a list of current seminars, see our Course List by Job Title. Most seminars are 2-3 days in duration. The new TSP will qualify to teach 2-3 different seminars within the first year. Most Construx TSPs eventually qualify to teach 5-10 or more different seminars. </p>
<p><b>Seminar Development. </b>Develop new seminars to complement Construx’s list of current seminars. Modify existing seminars to respond to client needs, incorporate advances in methods, and match personal preferences of the TSP. </p>
<p><b>Consulting. </b>Provide consulting support to training clients as needed. Initially, consulting is expected to make up only a small part of this position. </p>
<p><b>Travel. </b>Travel is required and ranges from 25% to 75%. Exact amount of travel within this range will depend on business demands and will be decided by mutual agreement between the TSP and Construx. Most Construx TSPs travel between 25%-50%. </p>
<p><b>Support for Sales Process. </b>Work with Construx sales staff to discuss Construx offerings with new and existing clients, review proposals, and so on. </p>
<p><b>Ongoing Training and Professional Development. </b>Participate in ongoing training via reading, attending Construx seminars, attending outside seminars, and presenting at and participating in conferences. </p>
<p><b>On Location in Bellevue Office. </b>This position is homed at Construx’s office in Bellevue, Washington. Construx permits telecommuting as job demands allow, but normally expects that TSPs are in the Bellevue office at least four days per week when not traveling.&#160; </p>
<p><b>Inclusive Job Definition. </b>Over time, Construx will always work to maximize the match between a person’s interests and the available work. At any given time, however, Construx requires employees to be flexible in keeping the company running. At times this can include activities as mundane as stocking the soda machine, sweeping the floor, unjamming the printer, and so on. As a small company sometimes the job description is "Whatever needs to be done by whoever is available to do it." </p>
<h3>Requirements and Qualifications</h3>
<p><b>Software Development Experience. </b>A minimum of 10 years of broad and deep experience in software development is required. </p>
<p><b>Training Experience. </b>Must have excellent interpersonal verbal skills, presentation skills, and writing skills. Strong preference will be given to candidates with extensive training and/or public speaking experience. </p>
<p><b>Education. </b>A four-year degree from an accredited university strongly preferred. Broad and deep knowledge of current software development literature is also required. "Leadership" level understanding of at least three of the following knowledge areas is required: Software Project Management, Software Requirements, Software Process, Software Maintenance, Software Design, Software Construction, Software Test, Software Quality, Software Configuration Management, and Software Tools and Methods. </p>
<p><b>Certifications. </b>Certification is not required. Certifications of CSDP, PMP, Certified Scrum Trainer, Certified Scrum Coach, and Certified Scrum Practitioner are a plus.</p>
<p><b>Toolbox orientation. </b>Construx is not pro-Agile, pro-Scrum, pro-waterfall, pro-CMMI, pro-modeling, pro-PMI, or pro-anything else. Construx works with an incredible diversity of companies facing an amazing range of software challenges. Construx TSPs must have a balanced view of software development practices, viewing the universe of software development practices as different tools that are better suited or worse suited to specific jobs. Different TSPs will have different focuses, different interests, different backgrounds, and different areas of expertise. We do not expect every TSP to be enthusiastic about every kind of software practice; however, we do require every TSP to be knowledgeable and open minded about the contributions that different kinds of practices are able to make in different contexts. </p>
<p><b>Presentations. </b>A record of conference presentations is a strong plus. </p>
<p><b>Publications. </b>A record of publications in refereed journals and/or popular trade publications is a plus. </p>
<p><b>Interpersonal Skills. </b>Requires the ability to work well in a team, foster partnership relationships, and communicate actively. </p>
<p><b>Service Quality Orientation. </b>Must be willing to pursue the highest possible levels of service quality, to measure personal service quality in ways that are visible to management and to peers, to discuss service quality, and to continually improve service quality. </p>
<p><b>Entrepreneurial. </b>Outstanding self-management, project management, and time management skills and the ability to multi-task and manage competing priorities effectively. Must be willing to accept a significant portion of compensation as variable, based on performance. <b><span style="color: rgb(6, 63, 144);"></span></b></p>
<p class="header_sub2"><b>Compensation</b></p>
<p>Compensation is base plus bonus. Base is approximately industry median with significant bonus potential. Construx has paid bonuses of 50%-100% or more depending on billable work. Amount of billable work depends on service quality, number of classes taught, willingness to travel, and other factors. Construx's goal is to allow each TSP to strike the personal balance he or she desires between compensation/travel/work, and personal life. </p>
<p class="header_sub2"><b>Supervisory Responsibilities</b></p>
<p>None at present.</p>
<p class="header_sub2"><b>Interaction</b></p>
<p>This person will interact with customers, other TSPs, sales staff, department heads, and administrative support personnel.</p>
<p class="header_sub2"><b>Company and Work Environment</b></p>
<p>Founded in 1996, Construx"s corporate mission is to Advance the Art and Science of Commercial Software Engineering. Construx provides training and consulting services in the area of software development best practices. Our clients are who&amp;rsquo;s who of Fortune 500 companies, technology leaders, and selected smaller companies. </p>
<p>In June 2007 Construx was named the <span class="lightblue_nopadding"><b>Best Small Company to Work For in Washington State</b> </span>by <i>Washington CEO </i>magazine, and in Summer 2008 Construx was again named the Best Small Company to Work for in Washington State by the <i>Puget Sound Business Journal</i>. Construx emphasizes hiring talented staff who are committed to making consistently excellent contributions within a team environment. </p>
<p>Hiring only the best people enables us to offer a work environment and benefits that are second to none. Benefits include:</p>
<ul>
<li>Private offices with office decorating budget </li>
<li>Laptops </li>
<li>Flexible schedule (as permitted by work demands) </li>
<li>Business casual dress </li>
<li>30 days of paid time off per year plus holidays (including 4 floating holidays) </li>
<li>Strong company commitment to maintaining balance between work and personal life </li>
<li>401K with 100% match to 10% of base salary </li>
<li>Profit sharing </li>
<li>Stock options </li>
<li>Training as needed </li>
<li>Family medical plan </li>
<li>Family dental plan </li>
<li>Family vision plan </li>
<li>Long term disability coverage </li>
<li>Company events at the Salish Lodge, the Herb Farm, Willows Lodge, Leavenworth, Chelan, and comparable locations </li>
<li>Free all-company lunches every Friday </li>
<li>Free beverages </li>
<li>Staff that enjoys doing excellent work and enjoys working with each other</li>
</ul>
<h3>Contact Us About This Position</h3>
<p>Email: <a href="mailto:resume@construx.com">resume@construx.com</a><br />Fax: 425.636.0159 </p>
]]></description>
  <dc:creator>johnc</dc:creator>
  <dc:date>2010-02-16T18:12:00Z</dc:date>
  <content:encoded><![CDATA[<p>Construx is looking for a trainer/consultant. Construx has a fantastic staff and unmatched benefits. For the well qualified person who wants to do excellent work in a highly stimulating environment, it is a dream job -- which is why we've been recognized as the <b>Best Small Company to Work for in Washington State </b>twice.</p>
<p>Here's the official job posting:</p>
<hr />
<p>Construx is seeking an experienced software engineer to provide training and consulting services with an initial emphasis on training ("Technical Service Provider (TSP)"). Deep software development experience is required, as is broad and deep knowledge of software development literature. Strong preference will be given to candidates with extensive training and/or public speaking experience. Candidates must have "leadership" level capability in at least three of the following knowledge areas:</p>
<ul>
<li>Software Project Management </li>
<li>Software Requirements </li>
<li>Software Design </li>
<li>Software Construction </li>
<li>Software Test </li>
<li>Software Quality Assurance </li>
<li>Software Maintenance </li>
<li>Software Tools and Methods </li>
<li>Software Configuration Management</li>
</ul>
<p><b>Detailed Duties</b></p>
<p><b>Seminar Delivery. </b>Deliver software engineering seminars to working software professionals at our Bellevue, Washington training facility and at client locations in North America and worldwide. For a list of current seminars, see our Course List by Job Title. Most seminars are 2-3 days in duration. The new TSP will qualify to teach 2-3 different seminars within the first year. Most Construx TSPs eventually qualify to teach 5-10 or more different seminars.</p>
<p><b>Seminar Development. </b>Develop new seminars to complement Construx’s list of current seminars. Modify existing seminars to respond to client needs, incorporate advances in methods, and match personal preferences of the TSP.</p>
<p><b>Consulting. </b>Provide consulting support to training clients as needed. Initially, consulting is expected to make up only a small part of this position.</p>
<p><b>Travel. </b>Travel is required and ranges from 25% to 75%. Exact amount of travel within this range will depend on business demands and will be decided by mutual agreement between the TSP and Construx. Most Construx TSPs travel between 25%-50%.</p>
<p><b>Support for Sales Process. </b>Work with Construx sales staff to discuss Construx offerings with new and existing clients, review proposals, and so on.</p>
<p><b>Ongoing Training and Professional Development. </b>Participate in ongoing training via reading, attending Construx seminars, attending outside seminars, and presenting at and participating in conferences.</p>
<p><b>On Location in Bellevue Office. </b>This position is homed at Construx’s office in Bellevue, Washington. Construx permits telecommuting as job demands allow, but normally expects that TSPs are in the Bellevue office at least four days per week when not traveling.&#160;</p>
<p><b>Inclusive Job Definition. </b>Over time, Construx will always work to maximize the match between a person’s interests and the available work. At any given time, however, Construx requires employees to be flexible in keeping the company running. At times this can include activities as mundane as stocking the soda machine, sweeping the floor, unjamming the printer, and so on. As a small company sometimes the job description is "Whatever needs to be done by whoever is available to do it."</p>
<p><b>Requirements and Qualifications</b></p>
<p><b>Software Development Experience. </b>A minimum of 10 years of broad and deep experience in software development is required.</p>
<p><b>Training Experience. </b>Must have excellent interpersonal verbal skills, presentation skills, and writing skills. Strong preference will be given to candidates with extensive training and/or public speaking experience.</p>
<p><b>Education. </b>A four-year degree from an accredited university strongly preferred. Broad and deep knowledge of current software development literature is also required. "Leadership" level understanding of at least three of the following knowledge areas is required: Software Project Management, Software Requirements, Software Process, Software Maintenance, Software Design, Software Construction, Software Test, Software Quality, Software Configuration Management, and Software Tools and Methods.</p>
<p><b>Certifications. </b>Certification is not required. Certifications of CSDP, Certified Scrum Trainer, Certified Scrum Coach, and Certified Scrum Practitioner are a plus.</p>
<p><b>Toolbox orientation. </b>Construx is not pro-Agile, pro-Scrum, pro-waterfall, pro-CMMI, pro-modeling, or pro-anything else. Construx works with an incredible diversity of companies facing an amazing range of software challenges. Construx TSPs must have a balanced view of software development practices, viewing the universe of software development practices as different tools that are better suited or worse suited to specific jobs. Different TSPs will have different focuses, different interests, different backgrounds, and different areas of expertise. We do not expect every TSP to be enthusiastic about every kind of software practice; however, we do require every TSP to be knowledgeable and open minded about the contributions that different kinds of practices are able to make in different contexts.</p>
<p><b>Presentations. </b>A record of conference presentations is a strong plus.</p>
<p><b>Publications. </b>A record of publications in refereed journals and/or popular trade publications is a plus.</p>
<p><b>Interpersonal Skills. </b>Requires the ability to work well in a team, foster partnership relationships, and communicate actively.</p>
<p><b>Service Quality Orientation. </b>Must be willing to pursue the highest possible levels of service quality, to measure personal service quality in ways that are visible to management and to peers, to discuss service quality, and to continually improve service quality.</p>
<p><b>Entrepreneurial. </b>Outstanding self-management, project management, and time management skills and the ability to multi-task and manage competing priorities effectively. Must be willing to accept a significant portion of compensation as variable, based on performance.</p>
<p><b>Compensation</b></p>
<p>Compensation is base plus bonus. Base is approximately industry median with significant bonus potential. Construx has paid bonuses of 50%-100% or more depending on billable work. Amount of billable work depends on service quality, number of classes taught, willingness to travel, and other factors. Construx's goal is to allow each TSP to strike the personal balance he or she desires between compensation/travel/work, and personal life. </p>
<p><b>Supervisory Responsibilities</b></p>
<p>None at present.</p>
<p><b>Interaction</b></p>
<p>This person will interact with customers, other TSPs, sales staff, department heads, and administrative support personnel.</p>
<p><b>Company and Work Environment</b></p>
<p>Founded in 1996, Construx's corporate mission is to Advance the Art and Science of Commercial Software Engineering. Construx provides training and consulting services in the area of software development best practices. Our clients are a who’s who of Fortune 500 companies, technology leaders, and selected smaller companies. </p>
<p>In June 2007 Construx was named the<b> Best Small Company to Work For in Washington State</b> by <i>Washington CEO </i>magazine, and in Summer 2008 Construx was again named the Best Small Company to Work for in Washington State by the <i>Puget Sound Business Journal</i>. Construx emphasizes hiring talented staff who are committed to making consistently excellent contributions within a team environment. </p>
<p>Hiring only the best people enables us to offer a work environment and benefits that are second to none. Benefits include:</p>
<ul>
<li>Private offices with office decorating budget </li>
<li>Laptops </li>
<li>Flexible schedule (as permitted by work demands) </li>
<li>Business casual dress </li>
<li>30 days of paid time off per year plus holidays (including 4 floating holidays) </li>
<li>Strong company commitment to maintaining balance between work and personal life </li>
<li>401K with 100% match to 10% of base salary </li>
<li>Profit sharing </li>
<li>Stock options </li>
<li>Training as needed </li>
<li>Family medical plan </li>
<li>Family dental plan </li>
<li>Family vision plan </li>
<li>Long term disability coverage </li>
<li>Company events at the Salish Lodge, the Herb Farm, Willows Lodge, Leavenworth, Chelan, and comparable locations </li>
<li>Free all-company lunches every Friday </li>
<li>Free beverages </li>
<li>Staff that enjoys doing excellent work and enjoys working with each other</li>
</ul>
<p><b>Contact Us About This Position</b></p>
<p>Email: <a href="mailto:resume@construx.com">resume@construx.com</a><br />Fax: 425.636.0159 </p>]]></content:encoded>
 </item>
 <item rdf:about="/10x_Software_Development/2010_ECSE_Meeting_Topics_Announced/?blogid=23485">
  <title>2010 ECSE Meeting Topics Announced</title>
  <link>https://www.construx.com/10x_Software_Development/2010_ECSE_Meeting_Topics_Announced/?blogid=23485</link>
  <description><![CDATA[<p>The 2010 Executive Council for Software Excellence (ECSE) meeting topics have been announced. They are: <br /></p>
<div class="resourceBlock clearfix"><div class="resourceInnerBlock clearfix"><div class="rbLeft">January</div>
<div class="rbRight"><ul>
<li>Optimizing for Innovation </li>
</ul>
</div>
<div class="rbLeft">February</div>
<div class="rbRight"><ul>
<li>Accelerating Organizational Change </li>
</ul>
</div>
<div class="rbLeft">March</div>
<div class="rbRight"><ul>
<li>Successful Leadership in Software Development </li>
</ul>
</div>
<div class="rbLeft">April</div>
<div class="rbRight"><ul>
<li>Managing the Release Process </li>
</ul>
</div>
<div class="rbLeft">May</div>
<div class="rbRight"><ul>
<li>Managing "Core" Development (aka "shared services" or "foundations") </li>
</ul>
</div>
<div class="rbLeft">June</div>
<div class="rbRight"><ul>
<li>Succeeding with Crunch Mode Projects </li>
</ul>
</div>
<div class="rbLeft">July </div>
<div class="rbRight"><ul>
<li>The Business of Software Development </li>
</ul>
</div>
<div class="rbLeft">August</div>
<div class="rbRight"><ul>
<li><em>Summer Break</em>  </li>
</ul>
</div>
<div class="rbLeft">September </div>
<div class="rbRight"><ul>
<li>Working Effectively with the Executive Team </li>
</ul>
</div>
<div class="rbLeft">October </div>
<div class="rbRight"><ul>
<li>Managing Technical Debt </li>
</ul>
</div>
<div class="rbLeft">November </div>
<div class="rbRight"><ul>
<li>Agile Development at the Enterprise Level </li>
</ul>
</div>
<div class="rbLeft">December </div>
<div class="rbRight"><ul>
<li>Managing Global Development </li>
</ul>
</div>
</div>
</div>
<p>ECSE members are software executives and senior managers who have multi-project responsibility, typically with staffs of 100+. You can see more details at the <a target="_self" href="/Executive_Council_Software_Excellence/" title="ECSE Website"><font color="#0000ff">ECSE Website</font></a> (you'll need a free login to access this website).</p>
<p>The ECSE meets in-person in Bellevue, Washington the second Monday of each month from 5:00-7:00 pm, Pacific Time and via teleconference the Friday following the second Monday of each month from 12:00 noon -1:00 pm, Eastern time (9:00-10:00 am Pacific time).</p>
<p>If you're interested in joining the group, please <a target="_blank" href="mailto:stevemcc@construx.com">contact me</a>.</p>]]></description>
  <dc:creator>stevemcc</dc:creator>
  <dc:date>2010-01-19T13:46:22Z</dc:date>
  <content:encoded><![CDATA[<p>The 2010 Executive Council for Software Excellence (ECSE) meeting topics have been announced. They are: <br /></p>
<div class="resourceBlock clearfix"><div class="resourceInnerBlock clearfix"><div class="rbLeft">January</div>
<div class="rbRight"><ul>
<li>Optimizing for Innovation </li>
</ul>
</div>
<div class="rbLeft">February</div>
<div class="rbRight"><ul>
<li>Accelerating Organizational Change </li>
</ul>
</div>
<div class="rbLeft">March</div>
<div class="rbRight"><ul>
<li>Successful Leadership in Software Development </li>
</ul>
</div>
<div class="rbLeft">April</div>
<div class="rbRight"><ul>
<li>Managing the Release Process </li>
</ul>
</div>
<div class="rbLeft">May</div>
<div class="rbRight"><ul>
<li>Managing "Core" Development (aka "shared services" or "foundations") </li>
</ul>
</div>
<div class="rbLeft">June</div>
<div class="rbRight"><ul>
<li>Succeeding with Crunch Mode Projects </li>
</ul>
</div>
<div class="rbLeft">July </div>
<div class="rbRight"><ul>
<li>The Business of Software Development </li>
</ul>
</div>
<div class="rbLeft">August</div>
<div class="rbRight"><ul>
<li><em>Summer Break</em>  </li>
</ul>
</div>
<div class="rbLeft">September </div>
<div class="rbRight"><ul>
<li>Working Effectively with the Executive Team </li>
</ul>
</div>
<div class="rbLeft">October </div>
<div class="rbRight"><ul>
<li>Managing Technical Debt </li>
</ul>
</div>
<div class="rbLeft">November </div>
<div class="rbRight"><ul>
<li>Agile Development at the Enterprise Level </li>
</ul>
</div>
<div class="rbLeft">December </div>
<div class="rbRight"><ul>
<li>Managing Global Development </li>
</ul>
</div>
</div>
</div>
<p>ECSE members are software executives and senior managers who have multi-project responsibility, typically with staffs of 100+. You can see more details at the <a target="_self" href="https://www.construx.com/Executive_Council_Software_Excellence/" title="ECSE Website"><font color="#0000ff">ECSE Website</font></a> (you'll need a free login to access this website).</p>
<p>The ECSE meets in-person in Bellevue, Washington the second Monday of each month from 5:00-7:00 pm, Pacific Time and via teleconference the Friday following the second Monday of each month from 12:00 noon -1:00 pm, Eastern time (9:00-10:00 am Pacific time).</p>
<p>If you're interested in joining the group, please <a target="_blank" href="mailto:stevemcc@construx.com">contact me</a>.</p>]]></content:encoded>
 </item>
 <item rdf:about="/10x_Software_Development/Why_Requirements_Weren_t_More_Prominent_in_Construx_s_Classic_Mistakes_Survey/?blogid=23485">
  <title>Why Requirements Weren&#39;t More Prominent in Construx&#39;s Classic Mistakes Survey</title>
  <link>https://www.construx.com/10x_Software_Development/Why_Requirements_Weren_t_More_Prominent_in_Construx_s_Classic_Mistakes_Survey/?blogid=23485</link>
  <description><![CDATA[<p>A reader of our <a href="http://www.construx.com/Page.aspx?hid=2537" target="_blank">2008 Classic Mistakes White Paper</a> made the following observation:</p>
<span lang="EN"><blockquote><p>I work in the Aerospace/Defense industry and have read your article called Software Development's Classic Mistakes 2008 dated July 2008. I am most interested in questioning the results of your most damaging classic mistakes overall that is tabulated in Table 8. I have read that up to 70% of project failures can be attributed to incomplete and poorly communicated requirements. Furthermore, the root cause of more than 50% of all errors identified in projects are introduced during the requirements analysis phase.</p>
<p>Could you please shed some light as to why the results of your study don't cite mistakes that are attributed to requirements? Is this embedded in one or more of the tasks or is this a non-issue?</p>
</blockquote>
<span lang="EN"><p>The reader is correct that multiple industry studies have found that requirements problems are the most common source of project challenges, so I can see why our results might seem anomalous. </p>
<p>The fact is that people who took our survey were given the chance to rate requirements as severe classic mistakes, and they just didn't. We included several classic mistakes in our study related to requirements:</p>
<ul>
<li>Feature creep</li>
<li>Shortchanged upstream activities</li>
<li>Lack of user involvement</li>
<li>Unclear project vision</li>
<li>Requirements gold plating</li>
</ul>
<p>Of these requirements-related mistakes, feature creep made the overall top 10 list (at #7). It was also the 6th most commonly reported mistake. None of the other requirements-related mistakes made the top 10 list for frequency, and none of them, including feature creep, made the top 10 list for severity.</p>
<p>Based on our consulting experience I am not that surprised to see non-requirements mistakes percolate to the top of the classic mistakes list. Some of the other studies I've seen didn't offer the option to choose some of the problems we listed in our survey, which means their survey respondents didn't have the chance to rank them higher than requirements problems. </p>
<p>Some studies I've seen survey only project managers, which could give a one-sided view of which mistakes are most common. And many of the surveys I've seen focus only on business systems projects (most notably, the Standish Group survey), whereas our data set was for a broader set of projects. </p>
<p>We've also found in many cases that requirements problems are symptoms of other issues, such as overly optimistic schedules (leading to shortchanging requirements), unrealistic expectations (same issue), short-changed QA (don't detect requirements problems until late), etc. </p>
<p>We don't have a classic mistake called simply "bad requirements" or anything comparable to that. Maybe we should add that. </p>
<h3>Classic Mistakes Update</h3>
<p>We're updating our Classic Mistakes survey in 2010. Help update these results, and <a href="https://vovici.com/wsb.dll/s/10431g2996e" target="_blank">take the survey</a>!</p>
</span><h3>Related Articles</h3>
<ul>
<li><a href="http://www.construx.com/Page.aspx?hid=2537" target="_blank">Construx's classic mistakes white paper 2008</a>  </li>
<li>My <a href="http://blogs.construx.com/blogs/stevemcc/archive/2008/05/13/Software_2700_s-Classic-Mistakes_2D002D00_2008.aspx" target="_blank">blog summary</a> of our classic mistakes survey results</li>
<li>Take the <a href="https://vovici.com/wsb.dll/s/10431g2996e" target="_blank">2010 Classic Mistakes survey</a>!</li>
</ul>
</span>]]></description>
  <dc:creator>stevemcc</dc:creator>
  <dc:date>2010-01-04T14:04:31Z</dc:date>
  <content:encoded><![CDATA[<p>A reader of our <a href="https://www.construx.com/classic/" target="_blank">2008 Classic Mistakes White Paper</a> made the following observation:</p>
<p class="indentLeft">I work in the Aerospace/Defense industry and have read your article called Software Development's Classic Mistakes 2008 dated July 2008. I am most interested in questioning the results of your most damaging classic mistakes overall that is tabulated in Table 8. I have read that up to 70% of project failures can be attributed to incomplete and poorly communicated requirements. Furthermore, the root cause of more than 50% of all errors identified in projects are introduced during the requirements analysis phase.</p>
<p class="indentLeft">Could you please shed some light as to why the results of your study don't cite mistakes that are attributed to requirements? Is this embedded in one or more of the tasks or is this a non-issue?</p>
<p>The reader is correct that multiple industry studies have found that requirements problems are the most common source of project challenges, so I can see why our results might seem anomalous.</p>
<p>The fact is that people who took our survey were given the chance to rate requirements as severe classic mistakes, and they just didn't. We included several classic mistakes in our study related to requirements:</p>
<ul>
<li>Feature creep</li>
<li>Shortchanged upstream activities</li>
<li>Lack of user involvement</li>
<li>Unclear project vision</li>
<li>Requirements gold plating</li>
</ul>
<p>Of these requirements-related mistakes, feature creep made the overall top 10 list (at #7). It was also the 6th most commonly reported mistake. None of the other requirements-related mistakes made the top 10 list for frequency, and none of them, including feature creep, made the top 10 list for severity.</p>
<p>Based on our consulting experience I am not that surprised to see non-requirements mistakes percolate to the top of the classic mistakes list. Some of the other studies I've seen didn't offer the option to choose some of the problems we listed in our survey, which means their survey respondents didn't have the chance to rank them higher than requirements problems.</p>
<p>Some studies I've seen survey only project managers, which could give a one-sided view of which mistakes are most common. And many of the surveys I've seen focus only on business systems projects (most notably, the Standish Group survey), whereas our data set was for a broader set of projects.</p>
<p>We've also found in many cases that requirements problems are symptoms of other issues, such as overly optimistic schedules (leading to shortchanging requirements), unrealistic expectations (same issue), short-changed QA (don't detect requirements problems until late), etc.</p>
<p>We don't have a classic mistake called simply "bad requirements" or anything comparable to that. Maybe we should add that.</p>
<p><strong>Classic Mistakes Update</strong></p>
<p>We're updating our Classic Mistakes survey in 2010. Help update these results, and <a href="https://vovici.com/wsb.dll/s/10431g2996e" target="_blank">take the survey</a>!</p>
<p><strong>Related Articles</strong></p>
<ul>
<li><a href="https://www.construx.com/classic/" target="_blank">Construx's classic mistakes white paper 2008</a>  </li>
<li>My <a href="https://www.construx.com/10x_Software_Development/Software_s_Classic_Mistakes--2008/" target="_blank">blog summary</a> of our classic mistakes survey results</li>
<li>Take the <a href="https://vovici.com/wsb.dll/s/10431g2996e" target="_blank">2010 Classic Mistakes survey!</a>  </li>
</ul>]]></content:encoded>
 </item>
 <item rdf:about="/10x_Software_Development/Travel_Restrictions_and_Offshore_Development/?blogid=23485">
  <title>Travel Restrictions and Offshore Development</title>
  <link>https://www.construx.com/10x_Software_Development/Travel_Restrictions_and_Offshore_Development/?blogid=23485</link>
  <description><![CDATA[<p>One benefit of my job is that I get to talk to people from hundreds of companies every year, and the people I work with talk to even more people. In recent discussions I"ve seen a disturbing trend emerging -- disturbing because it"s so common and because the effects are so easily predictable. </p>
<p>With the economic challenges many companies are facing, many companies have imposed travel restrictions that in practice are working out to "zero travel." I understand the value of this as a <em>general </em>cost containment measure. However, we are seeing increasing numbers of companies that have also applied this travel restriction to their offshore projects -- meaning no one from their domestic group can spend time with their offshore groups, and no one from the offshore groups can travel to their domestic locations. </p>
<p>The problem is this: Offshore development is challenging enough when you do everything right. Face-to-face time is an essential part of successful multi-site development. Video conferencing, web conferencing, etc. are all useful <em>supplements </em>to face-to-face time, but there is no good substitute for meeting the people you work with in person, meeting their families, having dinner and drinks together, playing soccer together -- that is, getting to know the other people as human beings.  </p>
<p>When crunch time hits, teams are a lot more effective when they're working with their "friends in another country" than when they're working with "those stupid offshore idiots who never understand our designs or requirements." </p>
<p>One executive put it this way: <font color="#0000ff"><strong>"The half life of trust is 6 weeks,"</strong></font> where trust is based on face-to-face communication. As face-to-face time drops, the consequences are easy to predict:</p>
<ol>
<li>Significantly increased communication mistakes</li>
<li>Problems in requirements, designs, test cases, etc. due to #1</li>
<li>Significantly increased defects due to #1 and #2</li>
<li>Increased friction between domestic vs. offshore groups due to #1-#3</li>
<li>Reduced trust due to all the above</li>
<li>More "us vs. them" thinking</li>
<li>Less ability to work through problems as they arise due to #1-#6</li>
<li>Overall reduction in effectiveness of the offshore operation due to all the above</li>
</ol>
<p>The overarching issue here is that the consideration that most commonly causes companies to go offshore is cost reduction, so it's not hard to understand why companies that go offshore to reduce costs would also eliminate travel in these challenging times for the same reason. Unfortunately that combination creates a "perfect storm" of factors that over time will render offshore development unworkable.  </p>
<p>It's been a few years since I've seen a widespread pattern like this where the consequences were both so damaging and so predictable. In all the companies Construx has worked with, I can't think of a single case in which offshore development has been successful for a company that didn't commit to significant face time between teams at different sites. If the current travel moratorium lasts 3-6 months, most companies will probably be able to recover. If it lasts much longer, I think we'll start to see companies "reevaluating their offshore strategies" -- and overlooking the fact that offshoring stopped working when their people stopped traveling. </p>
  <dc:creator>stevemcc</dc:creator>
  <dc:date>2009-08-06T14:40:06Z</dc:date>
  <content:encoded><![CDATA[<p>One benefit of my job is that I get to talk to people from hundreds of companies every year, and the people I work with talk to even more people. In recent discussions I've seen a disturbing trend emerging -- disturbing because it's so common and because the effects are so easily predictable.</p>
<p>With the economic challenges many companies are facing, many companies have imposed travel restrictions that in practice are working out to "zero travel." I understand the value of this as a <em>general </em>cost containment measure. However, we are seeing increasing numbers of companies that have also applied this travel restriction to their offshore projects -- meaning no one from their domestic group can spend time with their offshore groups, and no one from the offshore groups can travel to their domestic locations.</p>
<p>The problem is this: Offshore development is challenging enough when you do everything right. Face-to-face time is an essential part of successful multi-site development. Video conferencing, web conferencing, etc. are all useful <em>supplements </em>to face-to-face time, but there is no good substitute for meeting the people you work with in person, meeting their families, having dinner and drinks together, playing soccer together -- that is, getting to know the other people as human beings.</p>
<p>When crunch time hits, teams are a lot more effective when they're working with their "friends in another country" than when they're working with "those stupid offshore idiots who never understand our designs or requirements."</p>
<p>One executive put it this way: <strong>"The half life of trust is 6 weeks,"</strong> where trust is based on face-to-face communication. As face-to-face time drops, the consequences are easy to predict:</p>
<ol class="num">
<li>Significantly increased communication mistakes</li>
<li>Problems in requirements, designs, test cases, etc. due to #1</li>
<li>Significantly increased defects due to #1 and #2</li>
<li>Increased friction between domestic vs. offshore groups due to #1-#3</li>
<li>Reduced trust due to all the above</li>
<li>More "us vs. them" thinking</li>
<li>Less ability to work through problems as they arise due to #1-#6</li>
<li>Overall reduction in effectiveness of the offshore operation due to all the above</li>
</ol>
<p>The overarching issue here is that the consideration that most commonly causes companies to go offshore is cost reduction, so it's not hard to understand why companies that go offshore to reduce costs would also eliminate travel in these challenging times for the same reason. Unfortunately that combination creates a "perfect storm" of factors that over time will render offshore development unworkable.</p>
<p>It's been a few years since I've seen a widespread pattern like this where the consequences were both so damaging and so predictable. In all the companies Construx has worked with, I can't think of a single case in which offshore development has been successful for a company that didn't commit to significant face time between teams at different sites. If the current travel moratorium lasts 3-6 months, most companies will probably be able to recover. If it lasts much longer, I think we'll start to see companies "reevaluating their offshore strategies" -- and overlooking the fact that offshoring stopped working when their people stopped traveling.</p>]]></content:encoded>
 </item>
 <item rdf:about="/10x_Software_Development/State_of_the_Practice_Survey/?blogid=23485">
  <title>State of the Practice Survey</title>
  <link>https://www.construx.com/10x_Software_Development/State_of_the_Practice_Survey/?blogid=23485</link>
  <description><![CDATA[<p><span style="FONT-FAMILY: Arial">Construx has developed the <strong>State of the Practice Survey</strong> with the goal of better understanding which software practices really work, which really don"t work, and </span><span style="FONT-FAMILY: Arial">identify trends in practice adoption. </span></p>
<p><span style="FONT-FAMILY: Arial">Survey participants will receive a summary report of the findings later this year in advance of the published report.</span></p>
<p><span style="FONT-FAMILY: Arial">I hope you will share your views about the state of the practices in your organization. No one outside Construx will see any of the raw data, and information you share will be presented only in the form of summary statistics. </span></p>
<p><span style="FONT-FAMILY: Arial">I invite you to participate in the survey: <span style="FONT-FAMILY: Verdana; COLOR: rgb(102,102,102); FONT-SIZE: 8.5pt"><a title="https://vovici.com/wsb.dll/s/10431g3c3a5" href="https://vovici.com/wsb.dll/s/10431g3c3a5" target="_blank"><font color="#0000ff">https://vovici.com/wsb.dll/s/10431g3c3a5</font></a></span></span></p>]]></description>
  <dc:creator>stevemcc</dc:creator>
  <dc:date>2009-07-04T09:21:00Z</dc:date>
  <content:encoded><![CDATA[<p>Construx has developed the <strong>State of the Practice Survey</strong> with the goal of better understanding which software practices really work, which really don't work, and identify trends in practice adoption.</p>
<p>Survey participants will receive a summary report of the findings later this year in advance of the published report.</p>
<p>I hope you will share your views about the state of the practices in your organization. No one outside Construx will see any of the raw data, and information you share will be presented only in the form of summary statistics.</p>
<p>I invite you to participate in the survey: <a title="https://vovici.com/wsb.dll/s/10431g3c3a5" href="https://vovici.com/wsb.dll/s/10431g3c3a5" target="_blank">https://vovici.com/wsb.dll/s/10431g3c3a5 </a></p>]]></content:encoded>
 </item>
 <item rdf:about="/10x_Software_Development/Facebook_Page/?blogid=23485">
  <title>Facebook Page</title>
  <link>https://www.construx.com/10x_Software_Development/Facebook_Page/?blogid=23485</link>
  <description><![CDATA[<p>I now have a public Facebook page at <span lang="EN"></span><a href="http://www.facebook.com/n/?pages/Steve-McConnell/198720075270&amp;mid=8a4602G316afb94G1ae8a37G4c"><span style="text-decoration: underline;"><span style="color: rgb(0, 0, 255); font-size: x-small;"><u><font color="#0000ff" size="2"><span lang="EN">http://www.facebook.com/n/?pages/Steve-McConnell/198720075270&amp;mid=8a4602G316afb94G1ae8a37G4c</span></font></u></span><u><span lang="EN"></span></u></span><span lang="EN"></span></a>. </p>
<p>I plan to use this page for small scale blog entries, updates on what I"m reading, announcements, and so on. </p>]]></description>
  <dc:creator>stevemcc</dc:creator>
  <dc:date>2009-06-22T11:13:00Z</dc:date>
  <content:encoded><![CDATA[<p>I now have a public Facebook page at <a href="http://www.facebook.com/n/?pages/Steve-McConnell/198720075270&amp;mid=8a4602G316afb94G1ae8a37G4c">http://www.facebook.com/n/?pages/Steve-McConnell/198720075270&amp;mid=8a4602G316afb94G1ae8a37G4c </a>.</p>
<p>I plan to use this page for small scale blog entries, updates on what I'm reading, announcements, and so on.</p>]]></content:encoded>
 </item>
 <item rdf:about="/10x_Software_Development/Free_Webinar__10_Deadly_Sins_of_Software_Estimation/?blogid=23485">
  <title>Free Webinar: 10 Deadly Sins of Software Estimation</title>
  <link>https://www.construx.com/10x_Software_Development/Free_Webinar__10_Deadly_Sins_of_Software_Estimation/?blogid=23485</link>
  <description><![CDATA[<p>I"ll be giving a free webinar tomorrow at 10:00 am Pacific time on the 10 Deadly Sins of Software Estimation. You can sign up here:</p>
<p><a href="http://www.sdtimes.com/content/webinars.aspx">http://www.sdtimes.com/content/webinars.aspx</a></p>
<p>Here"s the full announcement:</p>
<p>The average project overruns its planned budget and schedule by 50%-80%. In practice, little work is done that could truly be called "estimation." Many projects are scheduled using a combination of legitimate business targets and liberal doses of wishful thinking. In this talk, award-winning author Steve McConnell presents 10 of the worst ways estimates go wrong, and presents time-tested rules of thumb for dramatically improving estimation accuracy</p>]]></description>
  <dc:creator>stevemcc</dc:creator>
  <dc:date>2009-06-22T11:05:09Z</dc:date>
  <content:encoded><![CDATA[<p>I'll be giving a free webinar tomorrow at 10:00 am Pacific time on the 10 Deadly Sins of Software Estimation. You can sign up here:</p>
<p><a href="http://www.sdtimes.com/content/webinars.aspx">http://www.sdtimes.com/content/webinars.aspx</a></p>
<p>Here's the full announcement:</p>
<p>The average project overruns its planned budget and schedule by 50%-80%. In practice, little work is done that could truly be called "estimation." Many projects are scheduled using a combination of legitimate business targets and liberal doses of wishful thinking. In this talk, award-winning author Steve McConnell presents 10 of the worst ways estimates go wrong, and offers time-tested rules of thumb for dramatically improving estimation accuracy.</p>
 </item>
 <item rdf:about="/10x_Software_Development/Next_Generation_Project_Planning_Tool__LiquidPlanner_2_0/?blogid=23485">
  <title>Next Generation Project Planning Tool: LiquidPlanner 2.0</title>
  <link>https://www.construx.com/10x_Software_Development/Next_Generation_Project_Planning_Tool__LiquidPlanner_2_0/?blogid=23485</link>
  <description><![CDATA[<p>I receive several requests a year to sit on various advisory boards, and I always say no--I just don"t have the time. Last year I received a request I couldn"t refuse from Charles Seybold, Bruce Henry, and Jason Carlson at <a href="http://www.liquidplanner.com" target="_blank">LiquidPlanner</a>. I had known Charles and Bruce when they were at Expedia and thought highly of their work, but the real appeal was the tool they were building. </p>
<p>They started with the vision of an online project planning tool that would include <a href="http://www.construx.com/Page.aspx?hid=1648" target="_blank">probabilistic scheduling</a>, in a sense a more flexible, on-line replacement for Microsoft Project. As LiquidPlanner took shape, their tool concept grew far beyond a Project replacement. The name of their tool is apt: Liquid Planner has created an online project community that supports work in modern-style projects and managing them far better than any other tool I"ve seen. </p>
<p>Key features of the tool include:</p>
<ul>
<li>Online tool can be used by individual contributors at different development sites </li>
<li>Individual contributors enter and update their own estimates, priorities, dependencies, and so on; the tool calculates the overall project plan </li>
<li>Dashboard allows a "big picture view" of the whole project </li>
<li>Individuals can view their own tasks and the tasks lists for other team members </li>
<li>Task-level estimates can be entered in ranges; LP computes the overall project "landing zone" </li>
<li>Integrated email and issue tracking </li>
<li>"Workspace chatter" allows project members to collaborate on tasks, ask questions, throw out ideas, and so on, all the while maintaining discussion threads for future reference. A wiki-like area allows for central storage of reference information about the project </li>
<li>Time tracking is integrated</li>
</ul>
<p>LP recently released LiquidPlanner 2.0, and I think this release achieves the elusive goal of <em>synergy</em>--where the interactions between the different parts add capability that goes well beyond each part considered individually. </p>
<p>For example, we've seen <em>time tracking </em>fail in many organizations because it's a standalone activity whose purpose has been poorly communicated, and many people just refuse to do it. In LP, time tracking is integrated with estimation, scheduling, and the online project community. There's no task-switching overhead to enter time into a different tool, and the purpose is much clearer (entering actuals against estimates). Time tracking becomes a seamless part of working on a project.</p>
<p>Another example is <em>bottom-up task estimates</em>. In other tools, individuals create estimates for their own work, perhaps in a spreadsheet, give them to their manager, who re-enters them into Project or perhaps a different spreadsheet. The manager tracks progress by going around and asking people what they've completed. Estimation is done in one environment, planning is done in another environment, tracking is done in a third environment, and so on. In such an environment estimates often get out of date; we've even seen estimates entered <em>post facto</em>, i.e., after the work has been done. In LP, estimating the work, organizing the work, tracking the work, and commenting on the work are all integrated into the same tool. </p>
<p>LP becomes a project ecosystem in which it's just easier for the team to stay in the environment than to move out of it, and having the team work in a planning-aware environment produces all kinds of benefits. </p>
<p>LiquidPlanner calls all this <em>Social Project Management</em>. In essence it simultaneously democratizes the project management task by facilitating greater contributions from all team members while empowering project managers with richer, more detailed, and more current project information. LP offers a 30-day free trial, and I encourage you to check it out. </p>
<ul>
<li><a href="http://www.liquidplanner.com" target="_blank">Liquid Planner Home Page</a>  </li>
<li><a href="http://www.liquidplanner.com/features" target="_blank">Feature Summary</a>  </li>
<li><a href="http://www.liquidplanner.com/screenshots" target="_blank">Screen Shots</a></li>
</ul>]]></description>
  <dc:creator>stevemcc</dc:creator>
  <dc:date>2009-03-02T14:33:00Z</dc:date>
  <content:encoded><![CDATA[<p>I receive several requests a year to sit on various advisory boards, and I always say no--I just don't have the time. Last year I received a request I couldn't refuse from Charles Seybold, Bruce Henry, and Jason Carlson at <a href="http://www.liquidplanner.com" target="_blank">LiquidPlanner</a>. I had known Charles and Bruce when they were at Expedia and thought highly of their work, but the real appeal was the tool they were building.</p>
<p>They started with the vision of an online project planning tool that would include <a href="https://www.construx.com/Thought_Leadership/Books/The_Cone_of_Uncertaintys/" target="_blank">probabilistic scheduling</a>, in a sense a more flexible, online replacement for Microsoft Project. As LiquidPlanner took shape, their tool concept grew far beyond a Project replacement. The name of their tool is apt: LiquidPlanner has created an online project community that supports working on modern-style projects and managing them far better than any other tool I've seen.</p>
<p>Key features of the tool include:</p>
<ul>
<li>Online tool can be used by individual contributors at different development sites </li>
<li>Individual contributors enter and update their own estimates, priorities, dependencies, and so on; the tool calculates the overall project plan </li>
<li>Dashboard allows a "big picture view" of the whole project </li>
<li>Individuals can view their own tasks and the tasks lists for other team members </li>
<li>Task-level estimates can be entered in ranges; LP computes the overall project "landing zone" </li>
<li>Integrated email and issue tracking </li>
<li>"Workspace chatter" allows project members to collaborate on tasks, ask questions, throw out ideas, and so on, all the while maintaining discussion threads for future reference. A wiki-like area allows for central storage of reference information about the project </li>
<li>Time tracking is integrated</li>
</ul>
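<p>The range-based estimation feature deserves a closer look. Rolling task-level ranges up into a project-level "landing zone" can be sketched with a small Monte Carlo simulation. To be clear, this is an illustrative sketch only: the task data, the uniform sampling, and the percentile cutoffs are my own assumptions, not LiquidPlanner's actual algorithm.</p>

```python
import random

# Hypothetical ranged task estimates (best case, worst case) in days.
# Illustrative data only -- not LiquidPlanner's data model.
tasks = [(2, 5), (3, 8), (1, 4), (5, 12)]

def landing_zone(tasks, trials=10000, seed=42):
    """Monte Carlo roll-up: sample each task's duration within its range,
    sum the samples per trial, and report percentiles of the total."""
    rng = random.Random(seed)
    totals = sorted(
        sum(rng.uniform(lo, hi) for lo, hi in tasks)
        for _ in range(trials)
    )
    pick = lambda p: totals[int(p / 100 * (trials - 1))]
    return pick(10), pick(50), pick(90)  # optimistic, likely, pessimistic

p10, p50, p90 = landing_zone(tasks)
print(f"Landing zone: {p10:.1f} to {p90:.1f} days (median {p50:.1f})")
```

<p>The point of the exercise is that the project-level range is much narrower, relative to its size, than the individual task ranges, which is why rolling up ranged estimates is more informative than summing single-point guesses.</p>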
<p>LP recently released LiquidPlanner 2.0, and I think this release achieves the elusive goal of <em>synergy</em>--where the interactions between the different parts add capability that goes well beyond each part considered individually.</p>
<p>For example, we've seen <em>time tracking </em>fail in many organizations because it's a standalone activity whose purpose has been poorly communicated, and many people just refuse to do it. In LP, time tracking is integrated with estimation, scheduling, and the online project community. There's no task-switching overhead to enter time into a different tool, and the purpose is much clearer (entering actuals against estimates). Time tracking becomes a seamless part of working on a project.</p>
<p>Another example is <em>bottom-up task estimates</em>. In other tools, individuals create estimates for their own work, perhaps in a spreadsheet, and give them to their manager, who re-enters them into Project or perhaps into a different spreadsheet. The manager tracks progress by going around and asking people what they've completed. Estimation is done in one environment, planning is done in another, tracking is done in a third, and so on. In such an environment estimates often get out of date; we've even seen estimates entered <em>post facto</em>, i.e., after the work has been done. In LP, estimating the work, organizing the work, tracking the work, and commenting on the work are all integrated into the same tool.</p>
<p>LP becomes a project ecosystem in which it's just easier for the team to stay in the environment than to move out of it, and having the team work in a planning-aware environment produces all kinds of benefits.</p>
<p>Liquid Planner calls all this <em>Social Project Management</em>. In essence, it democratizes the project management task by facilitating greater contributions from all team members while empowering project managers with richer, more detailed, and more current project information. LP offers a 30-day free trial, and I encourage you to check it out.</p>
<ul>
<li><a href="http://www.liquidplanner.com" target="_blank">Liquid Planner Home Page</a>  </li>
<li><a href="http://www.liquidplanner.com/features" target="_blank">Feature Summary</a>  </li>
<li><a href="http://www.liquidplanner.com/screenshots" target="_blank">Screen Shots</a></li>
</ul>]]></content:encoded>
 </item>
 <item rdf:about="/10x_Software_Development/Construx_Offers_Free_Training_for_Laid-Off_Software_Workers/?blogid=23485">
  <title>Construx Offers Free Training for Laid-Off Software Workers</title>
  <link>https://www.construx.com/10x_Software_Development/Construx_Offers_Free_Training_for_Laid-Off_Software_Workers/?blogid=23485</link>
<description><![CDATA[<p>After listening to doom and gloom economic reports for the past few months, we decided we would try to do something to brighten our little corner of the world. Here's our official press release about it:</p>
<p><em><strong>Construx Software has designated 25% of its public seminar seats free of charge to software workers who have been laid off. Construx seminars help software professionals improve their technical and managerial skills. Seminar attendees will be more effective when they reenter the workforce. Construx hopes this program will help laid-off software workers reenter the workforce more quickly.  </strong></em></p>
<p>Bellevue, WA, 12 February 2009 -- Construx Software today announced a complimentary program for training software workers who have been laid off during the recent economic downturn. Construx has designated 25% of the seats in its Software Development Best Practices training seminars free to personnel who have been laid off from professional software development jobs. </p>
<p>"During the dot com collapse the software industry was at the epicenter of the recession. Most of our clients were affected, and that meant we were affected," said Steve McConnell, Construx CEO and author of several best selling software development books. "We remember what it was like during the last downturn, and we are fortunate this time to be in a position to extend a helping hand to our friends whose companies are struggling." </p>
<p>One of Construx's corporate values is Sharing the Wealth. "Companies that are strong enough to lead us out of the recession should help in whatever ways we can," McConnell stated. "Professionals can take advantage of their downtime to improve their skills in ways that we hope will accelerate their job searches and continue to provide career-long benefits after they re-enter the workforce." </p>
<p>During boom periods many software professionals have difficulty finding time to sharpen their skills. "Our seminars focus on developing the skills needed to deliver world-class software. We want software people to be able to build their careers, whether they have a job at the moment or not," Mark Nygren, Construx's COO, stated. </p>
<p>Construx 1-, 2-, and 3-day seminars cover subjects including Software Project Management, Software Estimation, Software Requirements, Software Design, Software Testing, and numerous other software development topics. Construx offers more than 50 public seminars each year at its training facility in Bellevue, Washington. </p>
<p>Construx's SPEAR (Software Professional Educational Assistance and Re-entry) program is slated to continue through June 2009. Software professionals who would like to participate in the SPEAR program should see SPEAR program details on the web at <a href="http://www.construx.com/spear">http://www.construx.com/spear</a>. A full schedule of Construx's upcoming public seminars can be found on the web at <a href="http://www.construx.com/calendar">http://www.construx.com/calendar</a>. </p>
<p><strong>About Construx Software </strong></p>
<p>Construx Software is the market leader in software development best practices training and consulting. Construx was founded in 1996 by Steve McConnell, respected author and thought leader on software development best practices. McConnell's books Code Complete, Rapid Development, and other titles are some of the most accessible books on software development, with a million copies in print in 20 languages. McConnell's passion for advancing the art and science of commercial software engineering is shared by Construx's seasoned consultants. Their depth of knowledge and expertise have helped hundreds of companies solve their software challenges by identifying and adopting practices proven to produce high quality software faster, and with greater predictability. To learn more about Construx training and consulting services, visit Construx's Web site at <a href="http://www.construx.com/"><span style="color: rgb(129, 0, 129);">http://www.construx.com</span></a> or call +1-866-296-6300.  </p>
<p>Contact:<br />Pam McIlroy, Marketing Manager<br /><a href="mailto:pam.mcilroy@construx.com"><span style="color: rgb(0, 0, 255);">pam.mcilroy@construx.com</span></a> <br />425.636.0116</p>
<p>###</p>]]></description>
  <dc:creator>stevemcc</dc:creator>
  <dc:date>2009-02-24T10:51:00Z</dc:date>
<content:encoded><![CDATA[<p>After listening to doom and gloom economic reports for the past few months, we decided we would try to do something to brighten our little corner of the world. Here's our official press release about it:</p>
<p><em><strong>Construx Software has designated 25% of its public seminar seats free of charge to software workers who have been laid off. Construx seminars help software professionals improve their technical and managerial skills. Seminar attendees will be more effective when they reenter the workforce. Construx hopes this program will help laid-off software workers reenter the workforce more quickly.  </strong></em></p>
<p>Bellevue, WA, 12 February 2009 -- Construx Software today announced a complimentary program for training software workers who have been laid off during the recent economic downturn. Construx has designated 25% of the seats in its Software Development Best Practices training seminars free to personnel who have been laid off from professional software development jobs.</p>
<p>"During the dot com collapse the software industry was at the epicenter of the recession. Most of our clients were affected, and that meant we were affected," said Steve McConnell, Construx CEO and author of several best selling software development books. "We remember what it was like during the last downturn, and we are fortunate this time to be in a position to extend a helping hand to our friends whose companies are struggling."</p>
<p>One of Construx's corporate values is Sharing the Wealth. "Companies that are strong enough to lead us out of the recession should help in whatever ways we can," McConnell stated. "Professionals can take advantage of their downtime to improve their skills in ways that we hope will accelerate their job searches and continue to provide career-long benefits after they re-enter the workforce."</p>
<p>During boom periods many software professionals have difficulty finding time to sharpen their skills. "Our seminars focus on developing the skills needed to deliver world-class software. We want software people to be able to build their careers, whether they have a job at the moment or not," Mark Nygren, Construx's COO, stated.</p>
<p>Construx 1-, 2-, and 3-day seminars cover subjects including Software Project Management, Software Estimation, Software Requirements, Software Design, Software Testing, and numerous other software development topics. Construx offers more than 50 public seminars each year at its training facility in Bellevue, Washington.</p>
<p>Construx's SPEAR (Software Professional Educational Assistance and Re-entry) program is slated to continue through June 2009. Software professionals who would like to participate in the SPEAR program should see SPEAR program details on the web at <a href="https://www.construx.com/Thought_Leadership/Events/Practical_benefits_profound_results/">http://www.construx.com/spear</a>. A full schedule of Construx's upcoming public seminars can be found on the web at <a href="https://www.construx.com/Seminars/">http://www.construx.com/calendar</a>.</p>
<span>About Construx Software</span><p>Construx Software is the market leader in software development best practices training and consulting. Construx was founded in 1996 by Steve McConnell, respected author and thought leader on software development best practices. McConnell's books Code Complete, Rapid Development, and other titles are some of the most accessible books on software development, with a million copies in print in 20 languages. McConnell's passion for advancing the art and science of commercial software engineering is shared by Construx's seasoned consultants. Their depth of knowledge and expertise have helped hundreds of companies solve their software challenges by identifying and adopting practices proven to produce high quality software faster, and with greater predictability. To learn more about Construx training and consulting services, visit Construx's Web site at <a href="https://www.construx.com/Home/">http://www.construx.com</a> or call +1-866-296-6300. </p>
<p>Contact:<br />Pam McIlroy, Marketing Manager<br /><a href="mailto:pam.mcilroy@construx.com">pam.mcilroy@construx.com</a> <br />425.636.0116</p>]]></content:encoded>
 </item>
 <item rdf:about="/10x_Software_Development/2009_ECSE_Meeting_Topics_Announced/?blogid=23485">
  <title>2009 ECSE Meeting Topics Announced</title>
  <link>https://www.construx.com/10x_Software_Development/2009_ECSE_Meeting_Topics_Announced/?blogid=23485</link>
  <description><![CDATA[<p>The 2009 Executive Council for Software Excellence (ECSE) meeting topics have been announced. They are:</p>
<div class="resourceBlock clearfix"><div class="resourceInnerBlock clearfix"><div class="rbLeft">January </div>
<div class="rbRight"><ul>
<li>Successful Leadership in Software Development</li>
</ul>


 </div>
<div class="rbLeft">February </div>
<div class="rbRight"><ul>
<li>Overcoming a Legacy of Poor Quality</li>
</ul>


 </div>
<div class="rbLeft">March </div>
<div class="rbRight"><ul>
<li>Organizational Structures</li>
</ul>


 </div>
<div class="rbLeft">April </div>
<div class="rbRight"><ul>
<li>Accelerating Organizational Change</li>
</ul>


 </div>
<div class="rbLeft">May</div>
<div class="rbRight"><ul>
<li>Working Effectively with the Executive Team </li>
</ul>


 </div>
<div class="rbLeft">June</div>
<div class="rbRight"><ul>
<li>Working with Distributed/Offshore Teams </li>
</ul>


 </div>
<div class="rbLeft">July</div>
<div class="rbRight"><ul>
<li>The Business of Software Development</li>
</ul>


 </div>
<div class="rbLeft">August</div>
<div class="rbRight"><ul>
<li>Game Night: Software Project Simulation Board Games (Bellevue)<br />Summer break (dial-in) </li>
</ul>


 </div>
<div class="rbLeft">September</div>
<div class="rbRight"><ul>
<li>Legacy Systems Issues: Support vs. development, managing technical debt, deciding when to rework and when to replace </li>
</ul>


 </div>
<div class="rbLeft">October</div>
<div class="rbRight"><ul>
<li>Improving Productivity </li>
</ul>


 </div>
<div class="rbLeft">November</div>
<div class="rbRight"><ul>
<li>Succeeding in Heterogeneous Development Environments: Agile + Waterfall </li>
</ul>


 </div>
<div class="rbLeft">December</div>
<div class="rbRight"><ul>
<li>Project Portfolio Management</li>
</ul>


 </div>
</div>
</div>
<p>The ECSE meets in person in Bellevue, Washington, on the second Monday of each month from 5:00-7:00 pm Pacific time, and via teleconference on the Friday following the second Monday of each month from 11:00 am-12:00 noon Eastern time (8:00-9:00 am Pacific time).</p>
<p>ECSE members are software executives and senior managers who have multi-project responsibility, typically with staffs of 100+. You can see more details at the <a href="/Executive_Council_Software_Excellence/" target="_blank">ECSE Website</a> (you'll need a free login to access this website). If you're interested in joining the group, please <a href="mailto:stevemcc@construx.com" target="_blank">contact me</a>.</p>]]></description>
  <dc:creator>stevemcc</dc:creator>
  <dc:date>2009-01-07T13:51:04Z</dc:date>
  <content:encoded><![CDATA[<p>The 2009 Executive Council for Software Excellence (ECSE) meeting topics have been announced. They are:</p>
<div class="resourceBlock clearfix"><div class="resourceInnerBlock clearfix"><div class="rbLeft">January </div>
<div class="rbRight"><ul>
<li>Successful Leadership in Software Development</li>
</ul>


 </div>
<div class="rbLeft">February </div>
<div class="rbRight"><ul>
<li>Overcoming a Legacy of Poor Quality</li>
</ul>


 </div>
<div class="rbLeft">March </div>
<div class="rbRight"><ul>
<li>Organizational Structures</li>
</ul>


 </div>
<div class="rbLeft">April </div>
<div class="rbRight"><ul>
<li>Accelerating Organizational Change</li>
</ul>


 </div>
<div class="rbLeft">May</div>
<div class="rbRight"><ul>
<li>Working Effectively with the Executive Team </li>
</ul>


 </div>
<div class="rbLeft">June</div>
<div class="rbRight"><ul>
<li>Working with Distributed/Offshore Teams </li>
</ul>


 </div>
<div class="rbLeft">July</div>
<div class="rbRight"><ul>
<li>The Business of Software Development</li>
</ul>


 </div>
<div class="rbLeft">August</div>
<div class="rbRight"><ul>
<li>Game Night: Software Project Simulation Board Games (Bellevue)<br />Summer break (dial-in) </li>
</ul>


 </div>
<div class="rbLeft">September</div>
<div class="rbRight"><ul>
<li>Legacy Systems Issues: Support vs. development, managing technical debt, deciding when to rework and when to replace </li>
</ul>


 </div>
<div class="rbLeft">October</div>
<div class="rbRight"><ul>
<li>Improving Productivity </li>
</ul>


 </div>
<div class="rbLeft">November</div>
<div class="rbRight"><ul>
<li>Succeeding in Heterogeneous Development Environments: Agile + Waterfall </li>
</ul>


 </div>
<div class="rbLeft">December</div>
<div class="rbRight"><ul>
<li>Project Portfolio Management</li>
</ul>


 </div>
</div>
</div>
<p>The ECSE meets in person in Bellevue, Washington, on the second Monday of each month from 5:00-7:00 pm Pacific time, and via teleconference on the Friday following the second Monday of each month from 11:00 am-12:00 noon Eastern time (8:00-9:00 am Pacific time).</p>
<p>ECSE members are software executives and senior managers who have multi-project responsibility, typically with staffs of 100+. You can see more details at the <a href="https://www.construx.com/Executive_Council_Software_Excellence/" target="_blank">ECSE Website</a> (you'll need a free login to access this website). If you're interested in joining the group, please <a href="mailto:stevemcc@construx.com" target="_blank">contact me</a>.</p>]]></content:encoded>
 </item>
 <item rdf:about="/10x_Software_Development/New_White_Papers/?blogid=23485">
  <title>New White Papers</title>
  <link>https://www.construx.com/10x_Software_Development/New_White_Papers/?blogid=23485</link>
  <description><![CDATA[<p>We've recently posted a few new white papers on our website, along with some existing papers. These are free to members (and membership is free).</p>
<p><strong>10 Keys to Successful Scrum Adoption<br /></strong>Scrum is a project management approach for Agile software development and is the most commonly adopted Agile approach in the industry today. Construx has worked with hundreds of organizations to implement Agile approaches including Scrum. We have helped numerous organizations to adopt the core principles of Scrum and to adapt it based on their unique situations and challenges. This paper discusses ten keys to successful Scrum adoption identified during our consulting and training work with clients. <a title="http://cl.exct.net/?qs=36afefaf7ac151e7edfcd244439c87129e2e50d6d0b28901" href="/Resources/ResourceDetails/White_Papers/">www.construx.com/whitepapers</a></p>
<p><strong>Optimizing Agile for Your Organization</strong><br />Many organizations are interested in becoming Agile but wonder where to start. They want to ensure that their Agile adoption will achieve the desired benefits, goals, and objectives. This white paper will outline the major organization, cultural, and project considerations that are critical to a successful Agile adoption. It provides a starting point that works for most projects and organizations. <a title="http://cl.exct.net/?qs=36afefaf7ac151e7edfcd244439c87129e2e50d6d0b28901" href="/Resources/ResourceDetails/White_Papers/">www.construx.com/whitepapers</a></p>
<p><strong>Managing Technical Debt</strong><br />"Technical Debt" refers to delayed technical work that is incurred when technical shortcuts are taken, usually in pursuit of calendar-driven software schedules. Just like financial debt, some technical debts can serve valuable business purposes; other technical debts are simply counterproductive. Organizations vary in their ability to take on debt safely, track it, manage it, and pay it down. Explicit decision making before taking on debt and more explicit tracking of debt afterward are advised. <a title="http://cl.exct.net/?qs=36afefaf7ac151e7edfcd244439c87129e2e50d6d0b28901" href="/Resources/ResourceDetails/White_Papers/">www.construx.com/whitepapers</a></p>
<p><strong>Software Development's Classic Mistakes 2008</strong><br />Construx's Chief Software Engineer/CEO, Steve McConnell, introduced the concept of software development's classic mistakes in his book Rapid Development. He defined "classic mistakes" as mistakes that have been made so often, by so many people, that the consequences of making these mistakes should be predictable and the mistakes themselves should be avoidable. This white paper is the result of a survey of approximately 500 software practitioners to determine the frequency and severity of common software development mistakes. <a title="http://cl.exct.net/?qs=36afefaf7ac151e7edfcd244439c87129e2e50d6d0b28901" href="/Resources/ResourceDetails/White_Papers/">www.construx.com/whitepapers</a></p>
  <dc:creator>stevemcc</dc:creator>
  <dc:date>2008-10-02T13:43:27Z</dc:date>
  <content:encoded><![CDATA[<p>We've recently posted a few new white papers on our website, along with some existing papers. These are free to members (and membership is free).</p>
<p><strong>10 Keys to Successful Scrum Adoption<br /></strong>Scrum is a project management approach for Agile software development and is the most commonly adopted Agile approach in the industry today. Construx has worked with hundreds of organizations to implement Agile approaches including Scrum. We have helped numerous organizations to adopt the core principles of Scrum and to adapt it based on their unique situations and challenges. This paper discusses ten keys to successful Scrum adoption identified during our consulting and training work with clients. <a title="http://cl.exct.net/?qs=36afefaf7ac151e7edfcd244439c87129e2e50d6d0b28901" href="http://www.construx.com/whitepapers">www.construx.com/whitepapers</a></p>
<p><strong>Optimizing Agile for Your Organization</strong><br />Many organizations are interested in becoming Agile but wonder where to start. They want to ensure that their Agile adoption will achieve the desired benefits, goals, and objectives. This white paper will outline the major organization, cultural, and project considerations that are critical to a successful Agile adoption. It provides a starting point that works for most projects and organizations. <a title="http://cl.exct.net/?qs=36afefaf7ac151e7edfcd244439c87129e2e50d6d0b28901" href="http://www.construx.com/whitepapers">www.construx.com/whitepapers</a></p>
<p><strong>Managing Technical Debt</strong><br />"Technical Debt" refers to delayed technical work that is incurred when technical shortcuts are taken, usually in pursuit of calendar-driven software schedules. Just like financial debt, some technical debts can serve valuable business purposes; other technical debts are simply counterproductive. Organizations vary in their ability to take on debt safely, track it, manage it, and pay it down. Explicit decision making before taking on debt and more explicit tracking of debt afterward are advised. <a title="http://cl.exct.net/?qs=36afefaf7ac151e7edfcd244439c87129e2e50d6d0b28901" href="http://www.construx.com/whitepapers">www.construx.com/whitepapers</a></p>
<p><strong>Software Development's Classic Mistakes 2008</strong><br />Construx's Chief Software Engineer/CEO, Steve McConnell, introduced the concept of software development's classic mistakes in his book Rapid Development. He defined "classic mistakes" as mistakes that have been made so often, by so many people, that the consequences of making these mistakes should be predictable and the mistakes themselves should be avoidable. This white paper is the result of a survey of approximately 500 software practitioners to determine the frequency and severity of common software development mistakes. <a title="http://cl.exct.net/?qs=36afefaf7ac151e7edfcd244439c87129e2e50d6d0b28901" href="http://www.construx.com/whitepapers">www.construx.com/whitepapers</a></p>
 </item>
 <item rdf:about="/10x_Software_Development/In_Defense_of_the_Bill_Gates_/_Jerry_Seinfeld_Ad__2/?blogid=23485">
  <title>In Defense of the Bill Gates / Jerry Seinfeld Ad #2</title>
  <link>https://www.construx.com/10x_Software_Development/In_Defense_of_the_Bill_Gates_/_Jerry_Seinfeld_Ad__2/?blogid=23485</link>
<description><![CDATA[<p>Say what you like about the new Bill Gates / Jerry Seinfeld ads, I have to approve Bill's choice of bedtime reading. He's reading from Section 18.2 of Code Complete 2. (It's about 1:10 into the video.)</p><p><a href="http://www.youtube.com/watch?v=gBWPf1BWtkw">http://www.youtube.com/watch?v=gBWPf1BWtkw</a></p><p>I thought I was the only person who read Code Complete 2 aloud to put their kids to sleep!</p>]]></description>
  <dc:creator>stevemcc</dc:creator>
  <dc:date>2008-09-24T11:22:00Z</dc:date>
<content:encoded><![CDATA[<p>Say what you like about the new Bill Gates / Jerry Seinfeld ads, I have to approve Bill's choice of bedtime reading. He's reading from Section 18.2 of Code Complete 2. (It's about 1:10 into the video.)</p>
<p><a href="http://www.youtube.com/watch?v=gBWPf1BWtkw">http://www.youtube.com/watch?v=gBWPf1BWtkw</a></p>
<p>I thought I was the only person who read Code Complete 2 aloud to put their kids to sleep!</p>]]></content:encoded>
 </item>
 <item rdf:about="/10x_Software_Development/Software_Executive_Summit_2008_Rapidly_Approaching/?blogid=23485">
  <title>Software Executive Summit 2008 Rapidly Approaching</title>
  <link>https://www.construx.com/10x_Software_Development/Software_Executive_Summit_2008_Rapidly_Approaching/?blogid=23485</link>
  <description><![CDATA[<p>After Labor Day most of my focus goes into our annual <a target="_self" href="/Thought_Leadership/Events/Software_Executive_Summit_Page_Layout/?id=14501">Software Executive Summit</a>. We are now in the final registration period -- with a $1000 public seminar voucher bonus for people who register by September 15.</p>
<p>I'm very excited about the speaker lineup this year. In addition to me, Martin Fowler, and Ken Schwaber, we have several very interesting industry speakers.</p>
<p>Mike Morrissey is VP Infrastructure at RIM, where he's responsible for the software that keeps all the Blackberries running. Blackberries have experienced meteoric growth, and Mike's going to talk about managing that.</p>
<p>Travis McElfresh is VP Technology at MSNBC.com, where he's managed to improve quality and productivity in a 24x7 news organization--and improve morale at the same time.</p>
<p>Matt Peloquin, Construx's CTO, is going to talk about our experiences doing technical evaluations with numerous companies over the past several years. His talk title is "Lessons from the Software Wild," which I think people will find appropriate once they hear what he has to say.</p>
<p>I hope you'll be able to attend, or that you'll let the executives in your organization know about this unique event. I've appended the official event email below.</p>
<p>Steve</p>
<div class="grayBox"><h3>Construx Software</h3>
<h4>Executive Summit 2008</h4>
<p><strong>If you've been waiting to register for Construx's 2008 Executive Summit, this is the time!</strong></p>
<p>A few seats still remain. <strong><a href="/Summit_Registration/">Register </a>by September 15, 2008 </strong>and receive a $1000 voucher for Construx's <a href="/Seminars/?dm=0">public seminars</a>, usable by you or anyone on your staff.</p>
<strong>A rare opportunity for Top Software Executives to explore Software Development challenges and solutions with a Highly Select Group of Executive Peers.</strong> <p>The fifth annual Software Executive Summit provides a forum for Top Software Executives to compare, evaluate, and improve their Software Development practices and strategies at the Enterprise Level. Through stimulating <a href="/Thought_Leadership/Events/Software_Executive_Summit/Invited_Talks/?id=14886" title="/Thought_Leadership/Events/Software_Executive_Summit/Invited_Talks/?id=14886">keynote addresses</a> and invigorating <a href="/Thought_Leadership/Events/Software_Executive_Summit/Discussion_Topics/?id=14887" title="/Thought_Leadership/Events/Software_Executive_Summit/Discussion_Topics/?id=14887">small group discussions</a>, participants will develop new insights into their organizations and discover innovative solutions. </p>
<div class="whiteBox"><p>For the past three years, more than 99% of Summit attendees said they would attend again within two years, and 100% said they would recommend the event to others.</p>
<span><a href="/Thought_Leadership/Events/Software_Executive_Summit/Testimonials/?id=14841">more &gt;</a></span>  </div>
<p>Attendees at past Summits have reported they find the <a href="/Thought_Leadership/Events/Software_Executive_Summit/Discussion_Topics/?id=14887">direct discussions with executive peers</a> to be the most valuable part of the Summit. Previous attendees have represented top companies including Microsoft, Intel, Intuit, Symantec, EMC, Adobe, Expedia, Disney, Pixar, GE, Honeywell, General Dynamics, Tektronix, Costco, Nordstrom, Eli Lilly, MetLife, Thomson Financial, ADP, Bank of America, and many others. </p>
<span>Keynote Addresses </span>   <a href="/Thought_Leadership/Events/Software_Executive_Summit/Invited_Talks/?id=14886" title="/Thought_Leadership/Events/Software_Executive_Summit/Invited_Talks/?id=14886">more &gt;</a> <ul>
<li><strong>Ken Schwaber, </strong>"Scaling Scrum." Schwaber is the co-creator of Scrum and was one of the early, visible proponents of lightweight, adaptive processes for software development.</li>
<li><strong>Martin Fowler, </strong>"Cultivating Great Architects and Designers." Fowler is Chief Scientist at Thoughtworks and author of <em>Refactoring</em>, <em>UML Distilled</em>, and other software development best sellers.  </li>
<li><strong>Mike Morrissey, </strong>"Managing in a Hyper-Growth Environment." Morrissey is the VP Infrastructure at RIM, the Blackberry company, where he oversees the Blackberry infrastructure. </li>
<li><strong>Travis McElfresh</strong>, "Driving Employee Satisfaction, Morale, and Productivity at MSNBC.com." McElfresh is VP Technology at MSNBC.com. </li>
<li><strong>Matt Peloquin, </strong>"Technical Lessons From the Software Wild." Peloquin is CTO of Construx Software and oversees technical software evaluations for Construx clients. </li>
<li><strong>Steve McConnell, </strong>"Secrets of World Class Software Organizations." McConnell is author of <em>Code Complete</em>, <em>Software Estimation</em>, <em>Rapid Development</em>, and other software industry classics. </li>
</ul>
<span>Benefits of Attending </span>   <a href="/Thought_Leadership/Events/Software_Executive_Summit/Testimonials/?id=14841">more &gt;</a> <ul>
<li>Share and compare experiences with other software development executives </li>
<li>Explore issues with industry experts including Steve McConnell, Martin Fowler, Ken Schwaber, and other Summit attendees </li>
<li>Attend monthly Seattle-area ECSE meetings or dial-in meetings for two years </li>
<li>Receive Construx's monthly <em>Software Executive Report </em>for two years </li>
</ul>
<div class="whiteBox"><p>"Speakers were world class. I kept saying to myself, 'Wow ... this is something I need to bring back and teach my team.'" <br />-- Bob Cymbalski, Director, Engineering, Motricity    <a href="/Thought_Leadership/Events/Software_Executive_Summit/Testimonials/?id=14841">more comments &gt;</a></p>
</div>
<span>Discussion Topics </span>   <a target="_blank" href="/Thought_Leadership/Events/Software_Executive_Summit/Discussion_Topics/?id=14887">more &gt;</a> <ul>
<li>Managing Global Development </li>
<li>Navigating the Planning Cycle </li>
<li>Improving Productivity </li>
<li>Upgrading Your SDLC </li>
<li>Successful Leadership in Software Development </li>
<li>Guru Management: Special Issues in Managing Technical Personnel </li>
<li>Lessons Learned in Agile Development </li>
<li>Driving Improved Technical Practices </li>
</ul>
<span>Who Should Attend</span> <p>At past Summits, 95% of participants have held titles of VP, CTO, Director, or higher. Most participants will oversee the activities of at least 50-100 software personnel. All participants should have multi-project responsibility for software development at the organization or enterprise level.</p>
<div class="whiteBox">"Construx continues to provide a unique, relevant opportunity to interact as a peer group. Absolutely best conference, hands down." <br />-- David Spokane, Director, Software Engineering Office, EMC   <a href="/Thought_Leadership/Events/Software_Executive_Summit/Testimonials/?id=14841">more comments &gt;</a>  </div>
<span>Logistics</span>   <a href="/Thought_Leadership/Events/Software_Executive_Summit/Logistics/?id=14852">more &gt;</a> <p>The Summit will be held in downtown Seattle at the Grand Hyatt, October 27-29, 2008. Participation fee is $3495. Attendees will be assigned to discussion groups based on profiles submitted prior to the Summit. Reservations will be accepted on a first-come, first-served basis. <a href="/Summit_Registration/">Reserve your spot today</a>!</p>
<p class="registration">Registration Bonus!</p>
<br /><p><strong><a href="/Seminar_Registration_Step1/">Register</a></strong> <strong>by September 15, 2008 </strong>and receive a $1000 voucher for Construx's <a href="/Seminars/?dm=0">public seminars</a>, usable by you or anyone on your staff. </p>
<p>Please forward this email to others who might be interested in this event. </p>
<p><a href="/Summit_Registration/">http://www.construx.com/summit/</a>  </p>
<p><img width="149" height="38" border="0" src="/uploadedimages/cxlogo-xxsmall.jpg" /></p>
</div>]]></description>
  <dc:creator>stevemcc</dc:creator>
  <dc:date>2008-09-09T11:22:55Z</dc:date>
  <content:encoded><![CDATA[<p>After Labor Day most of my focus goes into our annual <a target="_self" href="https://www.construx.com/Thought_Leadership/Events/Software_Executive_Summit_Page_Layout/?id=14501">Software Executive Summit</a>. We are now in the final registration period -- with a $1000 public seminar voucher bonus for people who register by September 15.</p>
<p>I'm very excited about the speaker lineup this year. In addition to me, Martin Fowler, and Ken Schwaber, we have several very interesting industry speakers.</p>
<p>Mike Morrissey is VP Infrastructure at RIM, where he's responsible for the software that keeps all the Blackberries running. Blackberries have experienced meteoric growth, and Mike's going to talk about managing that.</p>
<p>Travis McElfresh is VP Technology at MSNBC.com, where he's managed to improve quality and productivity in a 24x7 news organization--and improve morale at the same time.</p>
<p>Matt Peloquin, Construx's CTO, is going to talk about our experiences doing technical evaluations with numerous companies over the past several years. His talk title is "Lessons from the Software Wild," which I think people will find appropriate once they hear what he has to say.</p>
<p>I hope you'll be able to attend, or that you'll let the executives in your organization know about this unique event. I've appended the official event email below.</p>
<p>Steve</p>
<div class="grayBox"><h3>Construx Software</h3>
<h4>Executive Summit 2008</h4>
<p><strong>If you've been waiting to register for Construx's 2008 Executive Summit, this is the time!</strong></p>
<p>A few seats still remain. <strong><a href="https://www.construx.com/Summit_Registration/">Register </a>by September 15, 2008 </strong>and receive a $1000 voucher for</p>
<p>Construx's <a href="https://www.construx.com/Seminars/?dm=0">public seminars</a>, usable by you or anyone on your staff.</p>
<strong>A rare opportunity for Top Software Executives to explore Software Development challenges and solutions with a Highly Select Group of Executive Peers.</strong> <p>The fifth annual Software Executive Summit provides a forum for Top Software Executives to compare, evaluate, and improve their Software Development practices and strategies at the Enterprise Level. Through stimulating <a href="https://www.construx.com/Thought_Leadership/Events/Software_Executive_Summit/Invited_Talks/?id=14886" title="/Thought_Leadership/Events/Software_Executive_Summit/Invited_Talks/?id=14886">keynote addresses</a> and invigorating <a href="https://www.construx.com/Thought_Leadership/Events/Software_Executive_Summit/Discussion_Topics/?id=14887" title="/Thought_Leadership/Events/Software_Executive_Summit/Discussion_Topics/?id=14887">small group discussions</a>, participants will develop new insights into their organizations and discover innovative solutions. </p>
<div class="whiteBox"><p>For the past three years, more than 99% of Summit attendees said they would attend again within two years, and 100% said they would recommend the event to others.</p>
<span><a href="https://www.construx.com/Thought_Leadership/Events/Software_Executive_Summit/Testimonials/?id=14841">more &gt;</a></span>  </div>
<p>Attendees at past Summits have reported they find the <a href="https://www.construx.com/Thought_Leadership/Events/Software_Executive_Summit/Discussion_Topics/?id=14887">direct discussions with executive peers</a> to be the most valuable part of the Summit. Previous attendees have represented top companies including Microsoft, Intel, Intuit, Symantec, EMC, Adobe, Expedia, Disney, Pixar, GE, Honeywell, General Dynamics, Tektronix, Costco, Nordstrom, Eli Lilly, MetLife, Thomson Financial, ADP, Bank of America, and many others. </p>
<span>Keynote Addresses </span>   <a href="https://www.construx.com/Thought_Leadership/Events/Software_Executive_Summit/Invited_Talks/?id=14886" title="/Thought_Leadership/Events/Software_Executive_Summit/Invited_Talks/?id=14886">more &gt;</a> <ul>
<li><strong>Ken Schwaber, </strong>"Scaling Scrum." Schwaber is the co-creator of Scrum and was one of the early, visible proponents of lightweight, adaptive processes for software development.</li>
<li><strong>Martin Fowler, </strong>"Cultivating Great Architects and Designers." Fowler is Chief Scientist at Thoughtworks and author of <em>Refactoring</em>, <em>UML Distilled</em>, and other software development best sellers.  </li>
<li><strong>Mike Morrissey, </strong>"Managing in a Hyper-Growth Environment." Morrissey is VP Infrastructure at RIM, the BlackBerry company, where he oversees the BlackBerry infrastructure. </li>
<li><strong>Travis McElfresh</strong>, "Driving Employee Satisfaction, Morale, and Productivity at MSNBC.com." McElfresh is VP Technology at MSNBC.com. </li>
<li><strong>Matt Peloquin, </strong>"Technical Lessons From the Software Wild." Peloquin is CTO of Construx Software and oversees technical software evaluations for Construx clients. </li>
<li><strong>Steve McConnell, </strong>"Secrets of World Class Software Organizations." McConnell is author of <em>Code Complete</em>, <em>Software Estimation</em>, <em>Rapid Development</em>, and other software industry classics. </li>
</ul>
<span>Benefits of Attending </span>   <a href="https://www.construx.com/Thought_Leadership/Events/Software_Executive_Summit/Testimonials/?id=14841">more &gt;</a> <ul>
<li>Share and compare experiences with other software development executives </li>
<li>Explore issues with industry experts including Steve McConnell, Martin Fowler, Ken Schwaber, and other Summit attendees </li>
<li>Attend monthly Seattle-area ECSE meetings or dial-in meetings for two years </li>
<li>Receive Construx's monthly <em>Software Executive Report </em>for two years </li>
</ul>
<div class="whiteBox"><p>"Speakers were world class. I kept saying to myself, 'Wow ... this is something I need to bring back and teach my team.'" <br />-- Bob Cymbalski, Director, Engineering, Motricity    <a href="https://www.construx.com/Thought_Leadership/Events/Software_Executive_Summit/Testimonials/?id=14841">more comments &gt;</a></p>
</div>
<span>Discussion Topics </span>   <a target="_blank" href="https://www.construx.com/Thought_Leadership/Events/Software_Executive_Summit/Discussion_Topics/?id=14887">more &gt;</a> <ul>
<li>Managing Global Development </li>
<li>Navigating the Planning Cycle </li>
<li>Improving Productivity </li>
<li>Upgrading Your SDLC </li>
<li>Successful Leadership in Software Development </li>
<li>Guru Management: Special Issues in Managing Technical Personnel </li>
<li>Lessons Learned in Agile Development </li>
<li>Driving Improved Technical Practices </li>
</ul>
<span>Who Should Attend</span> <p>At past Summits, 95% of participants have held titles of VP, CTO, Director, or higher. Most participants will oversee the activities of at least 50-100 software personnel. All participants should have multi-project responsibility for software development at the organization or enterprise level.</p>
<div class="whiteBox">"Construx continues to provide a unique, relevant opportunity to interact as a peer group. Absolutely best conference, hands down." <br />-- David Spokane, Director, Software Engineering Office, EMC   <a href="https://www.construx.com/Thought_Leadership/Events/Software_Executive_Summit/Testimonials/?id=14841">more comments &gt;</a>  </div>
<span>Logistics</span>   <a href="https://www.construx.com/Thought_Leadership/Events/Software_Executive_Summit/Logistics/?id=14852">more &gt;</a> <p>The Summit will be held in downtown Seattle at the Grand Hyatt, October 27-29, 2008. Participation fee is $3495. Attendees will be assigned to discussion groups based on profiles submitted prior to the Summit. Reservations will be accepted on a first-come, first-served basis. <a href="https://www.construx.com/Summit_Registration/">Reserve your spot today</a>!</p>
<p class="registration">Registration Bonus!</p>
<br /><p><strong><a href="https://www.construx.com/Seminar_Registration_Step1/">Register</a></strong> <strong>by September 15, 2008 </strong>and receive a $1000 voucher for Construx's <a href="https://www.construx.com/Seminars/?dm=0">public seminars</a>, usable by you or anyone on your staff. </p>
<p>Please forward this email to others who might be interested in this event. </p>
<p><a href="https://www.construx.com/Summit_Registration/">http://www.construx.com/summit/</a>  </p>
<p><img width="149" height="38" border="0" src="https://www.construx.com/uploadedimages/cxlogo-xxsmall.jpg" /></p>
</div>]]></content:encoded>
 </item>
 <item rdf:about="/10x_Software_Development/Agile_Software__Business_Impact_and_Business_Benefits/?blogid=23485">
  <title>Agile Software: Business Impact and Business Benefits</title>
  <link>https://www.construx.com/10x_Software_Development/Agile_Software__Business_Impact_and_Business_Benefits/?blogid=23485</link>
  <description><![CDATA[<p>Agile literature focuses on the benefits Agile provides to developers and development teams, with a secondary focus on the benefits Agile provides customers. Much of the Agile literature also asserts that Agile practices are more responsive to business needs.</p>
<p>Many businesses are embracing Agile and seeing significant benefits. Many other businesses are embracing Agile and regretting it. Why the different results?</p>
<p><strong>A Cautionary Tale of Agile Development</strong></p>
<p>At a previous <a href="/Thought_Leadership/Events/Software_Executive_Summit_Page_Layout/?id=14501" target="_blank">Construx Software Executive Summit</a>, one of the executives attending the event told the following story. In 2001 a major division of a well-known software company embraced Extreme Programming (XP) as the development approach they would use for a product development initiative involving a technical staff of about 200 people. The development team followed XP closely. They developed their software in short iterations. They sought close collaboration with customers and customer representatives. They kept quality high at all times. They had working software to demonstrate throughout the project. They were highly responsive to customer inputs and agile in changing direction based on those inputs.</p>
<p>Development went on for about two years. While the team was being highly responsive to customer input, that wasn't good enough. The cumulative total of its work was not converging to anything resembling a saleable product. Eventually the company concluded that the team was never going to produce a product, at which point most of the 200 people were laid off and the company reported a $50 million loss on the project.</p>
<p><strong>What Went Wrong?</strong></p>
<p>For this business some of the specific practices in XP were simply not well matched to the company's business needs. In particular, XP's emphasis at that time on defining requirements only one iteration at a time didn't provide for an overall product vision (aka roadmap, aka product definition) that would result in a compelling product. Many of the other XP practices probably worked fine. But the lack of comprehensive requirements combined with an emphasis on "embracing change" just enabled the company to move more quickly in the wrong direction. For this company, contrary to the Agile Manifesto, "responding to change" was not more important than "following a plan."</p>
<p>This is representative of other misfires we've seen in implementing Agile development. Technical leadership assumes Agile equals "good." But Agile doesn't equal "good"; it equals "more suitable for some circumstances" and "less suitable for other circumstances." Applying Agile in the wrong circumstances can cause major problems. A related issue is that "Agile" has become quite a large umbrella that covers dozens of specific practices. The more time goes by, the less useful the Agile buzzword becomes and the more meaningful it is to discuss specific practices instead.</p>
<p><strong>Agility and Predictability</strong></p>
<p>True agility--which means adopting a posture that allows you to respond rapidly to changing market conditions and customer demands--conflicts with predictability. Some businesses value agility, but many businesses value predictability more than they value the ability to change direction quickly. For those businesses, becoming more Agile is a second-level consideration; the first-level consideration is how to become more predictable. This was the problem that the company in the cautionary tale experienced. They got flexibility, but what they really needed was predictability.</p>
<p>Whether agility or predictability is more important depends both on what a business's customers are requesting and on how long-range the request is. Customers say, "I want this capability." In an ideal world, the business will be able to say, "OK, here it is right away." Being able to say "here it is right away" is what agility is all about.</p>
<p>Sometimes the work is too big to say "Here it is right away." In those cases, the business needs to say something like, "Sure, we can go that direction. This is a big piece of work and we can have that ready for you 22 weeks from now." That is where predictability starts to matter. When you say you'll deliver something in 22 weeks, you'd like to know that you really will deliver it in 22 weeks.</p>
<p><strong>Agility and Multi-Site Development</strong></p><p>Multi-site development has become increasingly common during the same period that Agile development has been on the rise, and these two trends are not always compatible. When a client tells me "we want our distributed teams to be more Agile," warning buzzers start going off in my head. Of the dozens of practices that can be called "Agile," some will help multi-location teams and some will undermine them.</p>
<p>Agile development commonly includes the major focuses of reducing paperwork, increasing the frequency and bandwidth of face-to-face communications, and emphasizing informal, incidental communication. Those focuses inherently run counter to spreading people out geographically, which reduces face-to-face time, reduces incidental communication, and in general reduces communication bandwidth. So some Agile practices are not a good match for multi-site teams.</p>
<p>Many other Agile practices can work fine in multi-site teams. For example, the practice of breaking work up into smaller chunks and delivering it more often than has been done traditionally provides discipline that's valuable to multi-site teams. Other valuable practices include daily builds or continuous builds, daily stand-up meetings, automated regression testing, developer unit testing, timebox development, small cross-functional teams, intensive short-term planning, an involved coach/manager, and frequent retrospectives, just to name a few.</p>
<p><strong>Introducing Agile Practices to a Business</strong></p>
<p>When we introduce Agile to a new company, the first thing we do is make sure that Agile software development is really what the business needs. Because "Agile" has become an all-encompassing term, we encourage our clients to be specific about what benefits they are looking for. If Agile turns out not to be what the business really needs, we work on building strengths in other areas.</p>
<p>With companies that do have a business justification for Agile, we look at specific practices:</p>
<ul>
<li>We identify which practices will best address the areas that are experiencing the most pain.</li>
<li>We determine which specific Agile practices are going to provide the most bang for the buck for that company.</li>
<li>We assess which Agile practices have the highest chances of being accepted within that company's existing technical culture and business culture.</li>
</ul>
<p>The software industry has a long history of taking good, incremental improvements in development practices and then overextending them. Agile development is not the exception to the rule. In many cases, Agile development practices can help companies raise the bar on their software development efforts. By focusing on business needs first, and technical solutions second, companies can avoid Agile becoming the proverbial "solution in search of a problem."</p>]]></description>
  <dc:creator>stevemcc</dc:creator>
  <dc:date>2008-07-29T12:38:00Z</dc:date>
  <content:encoded><![CDATA[<p>Agile literature focuses on the benefits Agile provides to developers and development teams, with a secondary focus on the benefits Agile provides customers. Much of the Agile literature also asserts that Agile practices are more responsive to business needs.</p>
<p>Many businesses are embracing Agile and seeing significant benefits. Many other businesses are embracing Agile and regretting it. Why the different results?</p>
<p><strong>A Cautionary Tale of Agile Development</strong></p>
<p>At a previous <a href="https://www.construx.com/Thought_Leadership/Events/Software_Executive_Summit_Page_Layout/?id=14501" target="_blank">Construx Software Executive Summit</a>, one of the executives attending the event told the following story. In 2001 a major division of a well-known software company embraced Extreme Programming (XP) as the development approach they would use for a product development initiative involving a technical staff of about 200 people. The development team followed XP closely. They developed their software in short iterations. They sought close collaboration with customers and customer representatives. They kept quality high at all times. They had working software to demonstrate throughout the project. They were highly responsive to customer inputs and agile in changing direction based on those inputs.</p>
<p>Development went on for about two years. While the team was being highly responsive to customer input, that wasn't good enough. The cumulative total of its work was not converging to anything resembling a saleable product. Eventually the company concluded that the team was never going to produce a product, at which point most of the 200 people were laid off and the company reported a $50 million loss on the project.</p>
<p><strong>What Went Wrong?</strong></p>
<p>For this business some of the specific practices in XP were simply not well matched to the company's business needs. In particular, XP's emphasis at that time on defining requirements only one iteration at a time didn't provide for an overall product vision (aka roadmap, aka product definition) that would result in a compelling product. Many of the other XP practices probably worked fine. But the lack of comprehensive requirements combined with an emphasis on "embracing change" just enabled the company to move more quickly in the wrong direction. For this company, contrary to the Agile Manifesto, "responding to change" was not more important than "following a plan."</p>
<p>This is representative of other misfires we've seen in implementing Agile development. Technical leadership assumes Agile equals "good." But Agile doesn't equal "good"; it equals "more suitable for some circumstances" and "less suitable for other circumstances." Applying Agile in the wrong circumstances can cause major problems. A related issue is that "Agile" has become quite a large umbrella that covers dozens of specific practices. The more time goes by, the less useful the Agile buzzword becomes and the more meaningful it is to discuss specific practices instead.</p>
<p><strong>Agility and Predictability</strong></p>
<p>True agility--which means adopting a posture that allows you to respond rapidly to changing market conditions and customer demands--conflicts with predictability. Some businesses value agility, but many businesses value predictability more than they value the ability to change direction quickly. For those businesses, becoming more Agile is a second-level consideration; the first-level consideration is how to become more predictable. This was the problem that the company in the cautionary tale experienced. They got flexibility, but what they really needed was predictability.</p>
<p>Whether agility or predictability is more important depends both on what a business's customers are requesting and on how long-range the request is. Customers say, "I want this capability." In an ideal world, the business will be able to say, "OK, here it is right away." Being able to say "here it is right away" is what agility is all about.</p>
<p>Sometimes the work is too big to say "Here it is right away." In those cases, the business needs to say something like, "Sure, we can go that direction. This is a big piece of work and we can have that ready for you 22 weeks from now." That is where predictability starts to matter. When you say you'll deliver something in 22 weeks, you'd like to know that you really will deliver it in 22 weeks.</p>
<p><strong>Agility and Multi-Site Development</strong></p><p>Multi-site development has become increasingly common during the same period that Agile development has been on the rise, and these two trends are not always compatible. When a client tells me "we want our distributed teams to be more Agile," warning buzzers start going off in my head. Of the dozens of practices that can be called "Agile," some will help multi-location teams and some will undermine them.</p>
<p>Agile development commonly includes the major focuses of reducing paperwork, increasing the frequency and bandwidth of face-to-face communications, and emphasizing informal, incidental communication. Those focuses inherently run counter to spreading people out geographically, which reduces face-to-face time, reduces incidental communication, and in general reduces communication bandwidth. So some Agile practices are not a good match for multi-site teams.</p>
<p>Many other Agile practices can work fine in multi-site teams. For example, the practice of breaking work up into smaller chunks and delivering it more often than has been done traditionally provides discipline that's valuable to multi-site teams. Other valuable practices include daily builds or continuous builds, daily stand-up meetings, automated regression testing, developer unit testing, timebox development, small cross-functional teams, intensive short-term planning, an involved coach/manager, and frequent retrospectives, just to name a few.</p>
<p><strong>Introducing Agile Practices to a Business</strong></p>
<p>When we introduce Agile to a new company, the first thing we do is make sure that Agile software development is really what the business needs. Because "Agile" has become an all-encompassing term, we encourage our clients to be specific about what benefits they are looking for. If Agile turns out not to be what the business really needs, we work on building strengths in other areas.</p>
<p>With companies that do have a business justification for Agile, we look at specific practices:</p>
<ul>
<li>We identify which practices will best address the areas that are experiencing the most pain.</li>
<li>We determine which specific Agile practices are going to provide the most bang for the buck for that company.</li>
<li>We assess which Agile practices have the highest chances of being accepted within that company's existing technical culture and business culture.</li>
</ul>
<p>The software industry has a long history of taking good, incremental improvements in development practices and then overextending them. Agile development is not the exception to the rule. In many cases, Agile development practices can help companies raise the bar on their software development efforts. By focusing on business needs first, and technical solutions second, companies can avoid Agile becoming the proverbial "solution in search of a problem."</p>]]></content:encoded>
 </item>
 <item rdf:about="/10x_Software_Development/New_Software_Executive_Summit_Speaker/?blogid=23485">
  <title>New Software Executive Summit Speaker</title>
  <link>https://www.construx.com/10x_Software_Development/New_Software_Executive_Summit_Speaker/?blogid=23485</link>
  <description><![CDATA[<p>I'm pleased to announce that we've added a new speaker to our already-stellar speaker lineup for this year's <a href="/Thought_Leadership/Events/Software_Executive_Summit_Page_Layout/?id=14501" target="_blank">Software Executive Summit</a>. Mike Morrissey, VP of Infrastructure at RIM (the BlackBerry company), will be giving a talk about Managing in a Hyper-Growth Environment (<a href="/Thought_Leadership/Events/Software_Executive_Summit/Invited_Talks/?id=14886" target="_blank">more details</a>). Here's the talk description:</p>
<p>When your business is in a hyper-growth phase, just keeping pace with change can be a full-time job. Research In Motion has seen the number of BlackBerry subscribers double every year, with 16 million subscribers at the end of the first quarter of FY09. To support this tremendous growth, RIM’s Infrastructure Software Team has more than tripled over the last three years, presenting significant challenges related to team growth, cultural evolution, scalability, availability, feature growth, distribution, and process maturity. In his presentation, Mike Morrissey will discuss these challenges and offer insight into staying ahead of the growth curve in a hyper-growth environment.</p>
<p>Mike joins our other speakers:</p>
<ul>
<li>Martin Fowler, "Cultivating Great Architects and Designers"</li>
<li>Ken Schwaber, "Scaling Scrum"</li>
<li>Travis McElfresh, VP Technology, MSNBC.com, "Driving Employee Satisfaction, Morale, and Productivity at MSNBC.com"</li>
<li>Matt Peloquin, CTO, Construx Software, "Technical Lessons from the Software Wild"</li>
<li>Steve McConnell, "Secrets of World Class Software Organizations"</li>
</ul>
<p>For more details on this year's Software Executive Summit, please visit <a href="/Thought_Leadership/Events/Software_Executive_Summit_Page_Layout/?id=14501">www.construx.com/summit/</a>.</p>]]></description>
  <dc:creator>stevemcc</dc:creator>
  <dc:date>2008-07-28T15:11:00Z</dc:date>
  <content:encoded><![CDATA[<p>I'm pleased to announce that we've added a new speaker to our already-stellar speaker lineup for this year's <a href="https://www.construx.com/Thought_Leadership/Events/Software_Executive_Summit_Page_Layout/?id=14501" target="_blank">Software Executive Summit</a>. Mike Morrissey, VP of Infrastructure at RIM (the BlackBerry company), will be giving a talk about Managing in a Hyper-Growth Environment (<a href="https://www.construx.com/Thought_Leadership/Events/Software_Executive_Summit/Invited_Talks/?id=14886" target="_blank">more details</a>). Here's the talk description:</p>
<p>When your business is in a hyper-growth phase, just keeping pace with change can be a full-time job. Research In Motion has seen the number of BlackBerry subscribers double every year, with 16 million subscribers at the end of the first quarter of FY09. To support this tremendous growth, RIM’s Infrastructure Software Team has more than tripled over the last three years, presenting significant challenges related to team growth, cultural evolution, scalability, availability, feature growth, distribution, and process maturity. In his presentation, Mike Morrissey will discuss these challenges and offer insight into staying ahead of the growth curve in a hyper-growth environment.</p>
<p>Mike joins our other speakers:</p>
<ul>
<li>Martin Fowler, "Cultivating Great Architects and Designers"</li>
<li>Ken Schwaber, "Scaling Scrum"</li>
<li>Travis McElfresh, VP Technology, MSNBC.com, "Driving Employee Satisfaction, Morale, and Productivity at MSNBC.com"</li>
<li>Matt Peloquin, CTO, Construx Software, "Technical Lessons from the Software Wild"</li>
<li>Steve McConnell, "Secrets of World Class Software Organizations"</li>
</ul>
<p>For more details on this year's Software Executive Summit, please visit <a href="https://www.construx.com/Thought_Leadership/Events/Software_Executive_Summit_Page_Layout/?id=14501">www.construx.com/summit/</a>.</p>]]></content:encoded>
 </item>
 <item rdf:about="/10x_Software_Development/Software_Executive_Summit_Details_Announced;_Early_Registration_Incentive/?blogid=23485">
  <title>Software Executive Summit Details Announced; Early Registration Incentive</title>
  <link>https://www.construx.com/10x_Software_Development/Software_Executive_Summit_Details_Announced;_Early_Registration_Incentive/?blogid=23485</link>
  <description><![CDATA[<p>I am pleased to officially announce the details of the <a href="/Thought_Leadership/Events/Software_Executive_Summit_Page_Layout/?id=14501" target="_blank">2008 Software Executive Summit</a>, to be held October 27-29, 2008 in Seattle, Washington.</p>
<p>The Summit provides a rare opportunity for top software executives to compare software development challenges and solutions in a small-group-discussion format. Their discussions are punctuated by thought-provoking keynote addresses by <strong>Martin Fowler</strong>, <strong>Ken Schwaber</strong>, <strong>Steve McConnell</strong>, and others.</p>
<p>For the past three years, 99.5% of Summit attendees said they would attend again within two years, and 100% said they would recommend the event to others.  <em><a href="/Thought_Leadership/Events/Software_Executive_Summit/Testimonials/?id=14841" target="_blank">See more comments...</a></em></p>
<div class="grayBox">"Speakers were world class. I kept saying to myself, 'Wow ... this is something I need to bring back and teach my team.'" <br />--Bob Cymbalski, Director, Engineering, Motricity    <a href="/Thought_Leadership/Events/Software_Executive_Summit/Testimonials/?id=14841" target="_blank"><em>See more comments</em></a></div>
<p><strong>Early Registration Incentive</strong></p>
<p>Attendees who register by June 15 will receive 2007 Summit pricing of $3000 (goes up to $3495 after June 15) plus a credit of $2000 toward <a href="/Seminars/?dm=0" target="_blank">Construx's public seminars</a>. Space is limited, so <strong><a href="/Summit_Registration/" target="_blank">Register Now!</a></strong></p>
<div class="grayBox">"Terrific conference. Great networking event. Being able to meet and learn from all the other software execs is a great opportunity. I've really enjoyed the conference. It's one of the few that I reserve a year in advance." -- Peter Scott, VP of Engineering, GeoMagic    <a href="/Thought_Leadership/Events/Software_Executive_Summit/Testimonials/?id=14841" target="_blank"><em>See more comments</em></a></div>
<p><strong>Who Should Attend</strong></p>
<p>At past Summits, 95% of participants have held titles of VP, CTO, Director, or higher. All participants have multi-project responsibility for software development at the organization or enterprise level.</p>
<p><strong>Keynotes</strong></p>
<p>Here are more details on the keynote talks this year:</p>
<ul>
<li>Steve McConnell, author of Software Estimation and Code Complete, “Secrets of World Class Software Organizations”</li>
<li>Martin Fowler, author of Refactoring and Patterns of Enterprise Application Architecture, “Cultivating Great Architects and Designers”</li>
<li>Ken Schwaber, co-creator of Scrum, “Scaling Scrum”</li>
<li>Travis McElfresh, VP Technology, “Driving Employee Satisfaction, Morale, and Productivity at MSNBC.com”</li>
<li>Matt Peloquin, CTO, Construx Software, “Technical Lessons from the Software Wild”</li>
<li><a href="/Thought_Leadership/Events/Software_Executive_Summit/Invited_Talks/?id=14886" target="_blank"><em>See more details</em></a></li>
</ul>
<div class="grayBox">"Construx continues to provide a unique, relevant opportunity to interact as a peer group. Absolutely best conference, hands down." -- David Spokane, Director, Software Engineering Office, EMC    <a href="/Thought_Leadership/Events/Software_Executive_Summit/Testimonials/?id=14841" target="_blank"><em>See more comments</em></a></div>
<p><strong>Discussion Topics</strong></p>
<p>Here is a preliminary list of topics for the Summit's small group discussions.</p>
<ul>
<li>Upgrading Your SDLC</li>
<li>Successful Leadership in Software Development</li>
<li>Managing Global Development</li>
<li>Guru Management: Special Issues in Managing Technical Personnel</li>
<li>Lessons Learned in Agile Development</li>
<li>Navigating the Planning Cycle</li>
<li>Driving Improved Technical Practices</li>
<li>Improving Productivity</li>
<li><em><a href="/Thought_Leadership/Events/Software_Executive_Summit/Discussion_Topics/?id=14887" target="_blank">See more details</a></em></li>
</ul>
<div class="grayBox">"The Summit is the only conference of its kind. It is a unique blend of presentations and small group discussions with high caliber attendees. Many tools, ideas, and useful practices were discussed at a rapid pace. Every software exec should attend every year."  --John Colton, VP Engineering, Application Security, Inc.    <a href="/Thought_Leadership/Events/Software_Executive_Summit/Testimonials/?id=14841" target="_blank"><em>See more comments</em></a></div>
<p>For more details on the Summit, see the <a href="/Thought_Leadership/Events/Software_Executive_Summit_Page_Layout/?id=14501" target="_blank">Summit home page</a> or <a href="/Summit_Registration/" target="_blank">register now</a>!</p>]]></description>
  <dc:creator>stevemcc</dc:creator>
  <dc:date>2008-06-02T12:10:00Z</dc:date>
  <content:encoded><![CDATA[<p>I am pleased to officially announce the details of the <a href="https://www.construx.com/Thought_Leadership/Events/Software_Executive_Summit_Page_Layout/?id=14501" target="_blank">2008 Software Executive Summit</a>, to be held October 27-29, 2008 in Seattle, Washington.</p>
<p>The Summit provides a rare opportunity for top software executives to compare software development challenges and solutions in a small-group-discussion format. Their discussions are punctuated by thought-provoking keynote addresses by <strong>Martin Fowler</strong>, <strong>Ken Schwaber</strong>, <strong>Steve McConnell</strong>, and others.</p>
<p>For the past three years, 99.5% of Summit attendees said they would attend again within two years, and 100% said they would recommend the event to others.  <em><a href="https://www.construx.com/Thought_Leadership/Events/Software_Executive_Summit/Testimonials/?id=14841" target="_blank">See more comments...</a></em></p>
<div class="grayBox">"Speakers were world class. I kept saying to myself, 'Wow ... this is something I need to bring back and teach my team.'" <br />--Bob Cymbalski, Director, Engineering, Motricity    <a href="https://www.construx.com/Thought_Leadership/Events/Software_Executive_Summit/Testimonials/?id=14841" target="_blank"><em>See more comments</em></a></div>
<p><strong>Early Registration Incentive</strong></p>
<p>Attendees who register by June 15 will receive 2007 Summit pricing of $3000 (goes up to $3495 after June 15) plus a credit of $2000 toward <a href="https://www.construx.com/Seminars/?dm=0" target="_blank">Construx's public seminars</a>. Space is limited, so <strong><a href="https://www.construx.com/Summit_Registration/" target="_blank">Register Now!</a></strong></p>
<div class="grayBox">"Terrific conference. Great networking event. Being able to meet and learn from all the other software execs is a great opportunity. I've really enjoyed the conference. It's one of the few that I reserve a year in advance." -- Peter Scott, VP of Engineering, GeoMagic    <a href="https://www.construx.com/Thought_Leadership/Events/Software_Executive_Summit/Testimonials/?id=14841" target="_blank"><em>See more comments</em></a></div>
<p><strong>Who Should Attend</strong></p>
<p>At past Summits, 95% of participants have held titles of VP, CTO, Director, or higher. All participants have multi-project responsibility for software development at the organization or enterprise level.</p>
<p><strong>Keynotes</strong></p>
<p>Here are more details on the keynote talks this year:</p>
<ul>
<li>Steve McConnell, author of Software Estimation and Code Complete, “Secrets of World Class Software Organizations”</li>
<li>Martin Fowler, author of Refactoring and Patterns of Enterprise Application Architecture, “Cultivating Great Architects and Designers”</li>
<li>Ken Schwaber, co-creator of Scrum, “Scaling Scrum”</li>
<li>Travis McElfresh, VP Technology, “Driving Employee Satisfaction, Morale, and Productivity at MSNBC.com”</li>
<li>Matt Peloquin, CTO, Construx Software, “Technical Lessons from the Software Wild”</li>
<li><a href="https://www.construx.com/Thought_Leadership/Events/Software_Executive_Summit/Invited_Talks/?id=14886" target="_blank"><em>See more details</em></a></li>
</ul>
<div class="grayBox">"Construx continues to provide a unique, relevant opportunity to interact as a peer group. Absolutely best conference, hands down." -- David Spokane, Director, Software Engineering Office, EMC    <a href="https://www.construx.com/Thought_Leadership/Events/Software_Executive_Summit/Testimonials/?id=14841" target="_blank"><em>See more comments</em></a></div>
<p><strong>Discussion Topics</strong></p>
<p>Here is a preliminary list of topics for the Summit's small group discussions.</p>
<ul>
<li>Upgrading Your SDLC</li>
<li>Successful Leadership in Software Development</li>
<li>Managing Global Development</li>
<li>Guru Management: Special Issues in Managing Technical Personnel</li>
<li>Lessons Learned in Agile Development</li>
<li>Navigating the Planning Cycle</li>
<li>Driving Improved Technical Practices</li>
<li>Improving Productivity</li>
<li><em><a href="https://www.construx.com/Thought_Leadership/Events/Software_Executive_Summit/Discussion_Topics/?id=14887" target="_blank">See more details</a></em></li>
</ul>
<div class="grayBox">"The Summit is the only conference of its kind. It is a unique blend of presentations and small group discussions with high caliber attendees. Many tools, ideas, and useful practices were discussed at a rapid pace. Every software exec should attend every year."  --John Colton, VP Engineering, Application Security, Inc.    <a href="https://www.construx.com/Thought_Leadership/Events/Software_Executive_Summit/Testimonials/?id=14841" target="_blank"><em>See more comments</em></a></div>
<p>For more details on the Summit, see the <a href="https://www.construx.com/Thought_Leadership/Events/Software_Executive_Summit_Page_Layout/?id=14501" target="_blank">Summit home page</a> or <a href="https://www.construx.com/Summit_Registration/" target="_blank">register now</a>!</p>]]></content:encoded>
 </item>
 <item rdf:about="/10x_Software_Development/New_Software_Executive_Report_Available__Managing_Core_Development/?blogid=23485">
  <title>New Software Executive Report Available: Managing Core Development</title>
  <link>https://www.construx.com/10x_Software_Development/New_Software_Executive_Report_Available__Managing_Core_Development/?blogid=23485</link>
<description><![CDATA[<p>One of my activities is moderating monthly discussion groups on software executive topics. The output of those meetings is captured in the Construx <em><a target="_self" href="/Thought_Leadership/Events/Construx_Software_Executive_Report/" title="Software Executive Reports">Software Executive Reports</a></em>. Our newest report, "Managing Core Development," is now available. Here's an excerpt:</p>
<div class="grayBox"><p>“Core” code always refers to code that is in some way more central than other product code. There are several variations on this theme:</p>
<ul>
<li>Most companies use “Core” to refer to architecture and functionality that multiple groups depend on. Reusable architectures, engines, toolsets, platforms, and frameworks are all examples of software that companies think of as core. </li>
<li>“Core” can refer to code that has exceptionally high quality requirements. </li>
<li>“Core” can refer to software components that provide competitive advantage. </li>
<li>One company defines Core as anything that affects the user experience. In this company’s business, this seems to be a variation on the theme of “Core” referring to code that has exceptionally high quality requirements and that provides competitive advantage. </li>
<li>In some cases, when data is of central importance to a company (e.g., product info for a web company), “Core” can also include data and data access. </li>
<li>Parts of the software that require governance are also sometimes considered to be core. </li>
</ul>
<p>Core is also known variously as "Platform," "Application Architecture," "Infrastructure," and other terms.</p>
<p><strong>Why Set up a Core Group?</strong></p>
The core team’s responsibility is to provide an easy path for other groups to use leading technology, i.e., to make it easier for non-core developers to do a good job. Core groups are sometimes set up to tap into specialized skills of a group that are needed commonly across products. For example, a company that produces scientific software might have a core group that consists of science Ph.D.s who write core code to implement key company algorithms. When the core is managed well, it can accelerate development by providing tools that make other groups more efficient and effective in the short term, reduce the support burden in the long term, or both. <p><strong>Differences between Core Development and Product Development</strong></p>
Companies report several common differences between product development and core development:
<ul>
<li>Quality assurance tends to be more rigorous, because problems in the core can adversely affect multiple products. </li>
<li>Management of the core tends to be more rigorous for the same reason—schedule problems in the core can adversely affect schedules for multiple projects. </li>
<li>When the core changes, change impacts on all teams need to be considered. </li>
<li>Support obligations for the core are nettlesome. How long will the core group support each version of the code that it produces? Supporting several versions across each of several products can quickly become an unmanageable support obligation. <br />... </li>
</ul>
</div>
<p>For the rest of the report, see the listing of Construx <em><a href="/Thought_Leadership/Events/Construx_Software_Executive_Report/">Software Executive Reports</a> </em>or link to the specific report below. A free membership is required to view these reports. Here are some recent topics and links to their reports:</p>
<ul>
<li><a href="/uploadedFiles/Construx/Construx_Content/Blogs/Organizational Structures.pdf">Organizational Structures</a></li>
<li><a href="/uploadedFiles/Construx/Construx_Content/Blogs/Upgrading Your SDLC.pdf">Upgrading Your SDLC</a></li>
<li><a href="/uploadedFiles/Construx/Construx_Content/Blogs/Managing Global Development.pdf">Managing Global Development</a></li>
<li><a href="/uploadedFiles/Construx/Construx_Content/Blogs/Navigating the Planning Cycle.pdf">Navigating the Planning Cycle</a></li>
<li><a href="/uploadedFiles/Construx/Construx_Content/Blogs/Managing Core Development.pdf">Managing Core Development</a></li>
</ul>]]></description>
  <dc:creator>stevemcc</dc:creator>
  <dc:date>2008-06-02T11:39:00Z</dc:date>
<content:encoded><![CDATA[<p>One of my activities is moderating monthly discussion groups on software executive topics. The output of those meetings is captured in the Construx <em><a target="_self" href="https://www.construx.com/Thought_Leadership/Events/Construx_Software_Executive_Report/" title="Software Executive Reports">Software Executive Reports</a></em>. Our newest report, "Managing Core Development," is now available. Here's an excerpt:</p>
<div class="grayBox"><p>“Core” code always refers to code that is in some way more central than other product code. There are several variations on this theme:</p>
<ul>
<li>Most companies use “Core” to refer to architecture and functionality that multiple groups depend on. Reusable architectures, engines, toolsets, platforms, and frameworks are all examples of software that companies think of as core. </li>
<li>“Core” can refer to code that has exceptionally high quality requirements. </li>
<li>“Core” can refer to software components that provide competitive advantage. </li>
<li>One company defines Core as anything that affects the user experience. In this company’s business, this seems to be a variation on the theme of “Core” referring to code that has exceptionally high quality requirements and that provides competitive advantage. </li>
<li>In some cases, when data is of central importance to a company (e.g., product info for a web company), “Core” can also include data and data access. </li>
<li>Parts of the software that require governance are also sometimes considered to be core. </li>
</ul>
<p>Core is also known variously as "Platform," "Application Architecture," "Infrastructure," and other terms.</p>
<p><strong>Why Set up a Core Group?</strong></p>
The core team’s responsibility is to provide an easy path for other groups to use leading technology, i.e., to make it easier for non-core developers to do a good job. Core groups are sometimes set up to tap into specialized skills of a group that are needed commonly across products. For example, a company that produces scientific software might have a core group that consists of science Ph.D.s who write core code to implement key company algorithms. When the core is managed well, it can accelerate development by providing tools that make other groups more efficient and effective in the short term, reduce the support burden in the long term, or both. <p><strong>Differences between Core Development and Product Development</strong></p>
Companies report several common differences between product development and core development:
<ul>
<li>Quality assurance tends to be more rigorous, because problems in the core can adversely affect multiple products. </li>
<li>Management of the core tends to be more rigorous for the same reason—schedule problems in the core can adversely affect schedules for multiple projects. </li>
<li>When the core changes, change impacts on all teams need to be considered. </li>
<li>Support obligations for the core are nettlesome. How long will the core group support each version of the code that it produces? Supporting several versions across each of several products can quickly become an unmanageable support obligation. <br />... </li>
</ul>
</div>
<p>For the rest of the report, see the listing of Construx <em><a href="https://www.construx.com/Thought_Leadership/Events/Construx_Software_Executive_Report/">Software Executive Reports</a> </em>or link to the specific report below. A free membership is required to view these reports. Here are some recent topics and links to their reports:</p>
<ul>
<li><a href="https://www.construx.com/uploadedFiles/Construx/Construx_Content/Blogs/Organizational Structures.pdf">Organizational Structures</a></li>
<li><a href="https://www.construx.com/uploadedFiles/Construx/Construx_Content/Blogs/Upgrading Your SDLC.pdf">Upgrading Your SDLC</a></li>
<li><a href="https://www.construx.com/uploadedFiles/Construx/Construx_Content/Blogs/Managing Global Development.pdf">Managing Global Development</a></li>
<li><a href="https://www.construx.com/uploadedFiles/Construx/Construx_Content/Blogs/Navigating the Planning Cycle.pdf">Navigating the Planning Cycle</a></li>
<li><a href="https://www.construx.com/uploadedFiles/Construx/Construx_Content/Blogs/Managing Core Development.pdf">Managing Core Development</a></li>
</ul>]]></content:encoded>
 </item>
 <item rdf:about="/10x_Software_Development/Software_s_Classic_Mistakes--2008/?blogid=23485">
  <title>Software&#39;s Classic Mistakes--2008</title>
  <link>https://www.construx.com/10x_Software_Development/Software_s_Classic_Mistakes--2008/?blogid=23485</link>
  <description><![CDATA[<p>In 2007 my colleagues at Construx Software and I updated the list of classic mistakes from my 1996 book <em>Rapid Development</em>. Throughout 2007 we conducted a survey to determine the frequency and severity of these classic mistakes. In other words, we wanted to get a more quantitative sense of just how "classic" these classic mistakes are.</p>
<p>More than 500 people responded to the survey. The majority of them were involved with web and business systems. A significant minority were involved in shrink-wrap/commercial systems, and about 10% were involved in embedded, system-critical systems, SaaS, or other kinds of software. About half the respondents were in lead/architect roles, about one-quarter in individual technical contributor roles, and the rest were in management or dual management/technical roles. The results are available in a white paper, "<a href="/classic/">Software Development's Classic Mistakes 2008</a>." You will need a login on our main web site to download the white paper. (The login is free.)</p>
<span>Excerpts from the Classic Mistakes Survey</span><p>Based on the survey responses, we computed the approximate frequency of the mistakes surveyed. Here is an excerpt from the white paper that shows the approximate frequency of occurrence of the most common classic mistakes:</p>
<p><a href="/classic/"><img border="0" alt="approximate frequency of classic mistakes" src="/uploadedimages/image_9.png" /></a></p>
<p>We also examined how severe the mistakes are when they occur. This excerpt from the white paper describes which mistakes produce <em>Catastrophic </em>or <em>Serious </em>consequences the most often:</p>
<p><a href="/classic/"><img alt="image" src="/uploadedimages/image_10.png" /></a></p>
<p>Finally, we made an assessment of which classic mistakes are most damaging overall. We multiplied the approximate average frequency of each mistake times its average severity to arrive at a Mistake Exposure Index (MEI). The MEI ranges from 0 to 10, with 10 being the worst. Here is an excerpt from the white paper that shows the classic mistakes with the worst MEIs:</p>
<p><a href="/classic/"><img border="0" alt="image" src="/uploadedimages/image_17.png" /></a></p>
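<p>As a rough illustration of the MEI arithmetic described above (the 0-10 input scales and the normalization below are illustrative assumptions, not necessarily the white paper's exact method):</p>

```python
# Illustrative sketch of the Mistake Exposure Index (MEI) arithmetic:
# multiply a mistake's average frequency by its average severity, then
# normalize so the index runs from 0 to 10. The 0-10 input scales and
# this normalization are assumptions for illustration only.

def mistake_exposure_index(avg_frequency, avg_severity, scale_max=10.0):
    """Frequency x severity, rescaled onto a 0-10 index."""
    raw = avg_frequency * avg_severity       # ranges 0 .. scale_max**2
    return raw * 10.0 / (scale_max ** 2)     # rescale to 0 .. 10

# A frequent (8/10), serious (7/10) mistake scores far worse than a
# rare (2/10), mild (3/10) one.
print(mistake_exposure_index(8, 7))  # 5.6
print(mistake_exposure_index(2, 3))  # 0.6
```

<p>With these scales, a mistake at maximum frequency and maximum severity scores exactly 10.</p>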
<p>Here is an excerpt that summarizes the average frequency and average severity of the mistakes with the highest MEIs:</p>
<p><a href="/classic/"><img border="0" alt="image" src="/uploadedimages/image_18.png" /></a></p>
<span>Conclusions from the Classic Mistakes Survey</span><p>The raw survey results are interesting and so are some of the general trends.</p>
<p>One conclusion is that two of the mistakes added in 2008 (i.e., that weren't in my 1996 book <em>Rapid Development</em>) made the top 10:</p>
<ul>
<li>Confusing estimates with targets</li>
<li>Excessive multi-tasking </li>
</ul>
<p>This suggests that continued refinement of the classic mistakes list is worthwhile.</p>
<p>A second conclusion is that a few of the mistakes don't occur frequently enough or aren't severe enough when they do occur to really be considered "classic" mistakes:</p>
<p><a href="/classic/"><img border="0" src="/uploadedimages/image_19.png" /></a></p>
<p>A third conclusion is that many of the mistakes in the survey do indeed deserve to be called "classic" mistakes. I find it interesting that 8 of the top 10 mistakes in this year's report were listed in a book I published in 1996. If these mistakes were classic in 1996, they're even more classic 12 years later!</p>
<span>Final Thoughts</span><p>We'll be updating the classic mistakes survey in 2009, and we'd appreciate your input into the survey. You can <a href="https://vovici.com/wsb.dll/s/10431g2996e">take the survey in about 30 minutes</a>. If you take the survey, we'll send you the results before they're made available to the general public.</p>
<p>Why do people keep making these mistakes? I'm interested to hear your thoughts.</p>
<span>Resources</span><ul>
<li>"<a href="/classic/">Software Development's Classic Mistakes 2008</a>," a Construx white paper which includes the complete list of mistakes, descriptions of each mistake, and results from the survey for every classic mistake (login required). </li>
<li><a href="https://vovici.com/wsb.dll/s/10431g2996e">Classic Mistakes Survey</a>  </li>
<li>My <a title="executive presentation" href="/classic/" target="_self">executive presentation</a> on Classic Mistakes (login required) </li>
<li>My nominations for <a href="http://www.stevemcconnell.com/ieeesoftware/bp05.htm">Top 12 classic mistakes</a> in 1996 </li>
<li>Excerpt from <a href="http://www.stevemcconnell.com/rdenum.htm">Rapid Development on classic mistakes </a></li>
</ul>]]></description>
  <dc:creator>stevemcc</dc:creator>
  <dc:date>2008-05-13T11:43:00Z</dc:date>
  <content:encoded><![CDATA[<p>In 2007 my colleagues at Construx Software and I updated the list of classic mistakes from my 1996 book <em>Rapid Development</em>. Throughout 2007 we conducted a survey to determine the frequency and severity of these classic mistakes. In other words, we wanted to get a more quantitative sense of just how "classic" these classic mistakes are.</p>
<p>More than 500 people responded to the survey. The majority of them were involved with web and business systems. A significant minority were involved in shrink-wrap/commercial systems, and about 10% were involved in embedded, system-critical systems, SaaS, or other kinds of software. About half the respondents were in lead/architect roles, about one-quarter in individual technical contributor roles, and the rest were in management or dual management/technical roles. The results are available in a white paper, "<a href="https://www.construx.com/classic/">Software Development's Classic Mistakes 2008</a>." You will need a login on our main web site to download the white paper. (The login is free.)</p>
<span>Excerpts from the Classic Mistakes Survey</span><p>Based on the survey responses, we computed the approximate frequency of the mistakes surveyed. Here is an excerpt from the white paper that shows the approximate frequency of occurrence of the most common classic mistakes:</p>
<p><a href="https://www.construx.com/classic/"><img border="0" alt="approximate frequency of classic mistakes" src="https://www.construx.com/uploadedimages/image_9.png" width="529" height="320" /></a></p>
<p>We also examined how severe the mistakes are when they occur. This excerpt from the white paper describes which mistakes produce <em>Catastrophic </em>or <em>Serious </em>consequences the most often:</p>
<p><a href="https://www.construx.com/classic/"><img alt="image" src="https://www.construx.com/uploadedimages/image_10.png" width="529" height="333" /></a></p>
<p>Finally, we made an assessment of which classic mistakes are most damaging overall. We multiplied the approximate average frequency of each mistake times its average severity to arrive at a Mistake Exposure Index (MEI). The MEI ranges from 0 to 10, with 10 being the worst. Here is an excerpt from the white paper that shows the classic mistakes with the worst MEIs:</p>
<p><a href="https://www.construx.com/classic/"><img border="0" alt="image" src="https://www.construx.com/uploadedimages/image_17.png" width="529" height="320" /></a></p>
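<p>As a rough illustration of the MEI arithmetic described above (the 0-10 input scales and the normalization below are illustrative assumptions, not necessarily the white paper's exact method):</p>

```python
# Illustrative sketch of the Mistake Exposure Index (MEI) arithmetic:
# multiply a mistake's average frequency by its average severity, then
# normalize so the index runs from 0 to 10. The 0-10 input scales and
# this normalization are assumptions for illustration only.

def mistake_exposure_index(avg_frequency, avg_severity, scale_max=10.0):
    """Frequency x severity, rescaled onto a 0-10 index."""
    raw = avg_frequency * avg_severity       # ranges 0 .. scale_max**2
    return raw * 10.0 / (scale_max ** 2)     # rescale to 0 .. 10

# A frequent (8/10), serious (7/10) mistake scores far worse than a
# rare (2/10), mild (3/10) one.
print(mistake_exposure_index(8, 7))  # 5.6
print(mistake_exposure_index(2, 3))  # 0.6
```

<p>With these scales, a mistake at maximum frequency and maximum severity scores exactly 10.</p>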
<p>Here is an excerpt that summarizes the average frequency and average severity of the mistakes with the highest MEIs:</p>
<p><a href="https://www.construx.com/classic/"><img border="0" alt="image" src="https://www.construx.com/uploadedimages/image_18.png" width="529" height="320" /></a></p>
<span>Conclusions from the Classic Mistakes Survey</span><p>The raw survey results are interesting and so are some of the general trends.</p>
<p>One conclusion is that two of the mistakes added in 2008 (i.e., that weren't in my 1996 book <em>Rapid Development</em>) made the top 10:</p>
<ul>
<li>Confusing estimates with targets</li>
<li>Excessive multi-tasking </li>
</ul>
<p>This suggests that continued refinement of the classic mistakes list is worthwhile.</p>
<p>A second conclusion is that a few of the mistakes don't occur frequently enough or aren't severe enough when they do occur to really be considered "classic" mistakes:</p>
<p><a href="https://www.construx.com/classic/"><img border="0" src="https://www.construx.com/uploadedimages/image_19.png" width="529" /></a></p>
<p>A third conclusion is that many of the mistakes in the survey do indeed deserve to be called "classic" mistakes. I find it interesting that 8 of the top 10 mistakes in this year's report were listed in a book I published in 1996. If these mistakes were classic in 1996, they're even more classic 12 years later!</p>
<span>Final Thoughts</span><p>We'll be updating the classic mistakes survey in 2009, and we'd appreciate your input into the survey. You can <a href="https://vovici.com/wsb.dll/s/10431g2996e">take the survey in about 30 minutes</a>. If you take the survey, we'll send you the results before they're made available to the general public.</p>
<p>Why do people keep making these mistakes? I'm interested to hear your thoughts.</p>
<span>Resources</span><ul>
<li>"<a href="https://www.construx.com/classic/">Software Development's Classic Mistakes 2008</a>," a Construx white paper which includes the complete list of mistakes, descriptions of each mistake, and results from the survey for every classic mistake (login required). </li>
<li><a href="https://vovici.com/wsb.dll/s/10431g2996e">Classic Mistakes Survey</a>  </li>
<li>My <a title="executive presentation" href="https://www.construx.com/classic/" target="_self">executive presentation</a> on Classic Mistakes (login required) </li>
<li>My nominations for <a href="http://www.stevemcconnell.com/ieeesoftware/bp05.htm">Top 12 classic mistakes</a> in 1996 </li>
<li>Excerpt from <a href="http://www.stevemcconnell.com/rdenum.htm">Rapid Development on classic mistakes </a></li>
</ul>]]></content:encoded>
 </item>
 <item rdf:about="/10x_Software_Development/Measuring_Productivity_of_Individual_Programmers/?blogid=23485">
  <title>Measuring Productivity of Individual Programmers</title>
  <link>https://www.construx.com/10x_Software_Development/Measuring_Productivity_of_Individual_Programmers/?blogid=23485</link>
  <description><![CDATA[<P>My last couple of posts on <A class="" href="http://forums.construx.com/blogs/stevemcc/archive/2008/03/27/productivity-variations-among-software-developers-and-teams-the-origin-of-quot-10x-quot.aspx" mce_href="/blogs/stevemcc/archive/2008/03/27/productivity-variations-among-software-developers-and-teams-the-origin-of-quot-10x-quot.aspx">productivity variations among programmers</A> and the <A class="" href="http://forums.construx.com/blogs/stevemcc/archive/2008/03/31/chief-programmer-team-update.aspx" mce_href="/blogs/stevemcc/archive/2008/03/31/chief-programmer-team-update.aspx">Chief Programmer Team model</A> gave rise to some discussion about hazards of measuring software productivity at the individual programmer level. Software engineering studies normally measure productivity in terms of time to complete a specific task, or sometimes in terms of lines of code per effort-hour, staff-month, or some other measure of effort. Regardless of how you choose to measure productivity, there will be issues. </P><P><STRONG>Productivity in Lines of Code Per Staff Month</STRONG></P><P>Software design is a non-determinisitic activity, and researchers have found 10x variations in the code volume that different designer/developers will generate in response to a particular problem specification. If productivity is measured as lines of code per staff month (or equivalent), that implicitly suggests that the programmer who writes 10 times the amount of code to solve a particular problem is more productive than the programmer who writes 1 times the amount of code. That clearly is not right. Some commenters on my previous blog entry asserted that great programmers always write less code. My observation is that there’s a correlation there, but I wouldn’t make that statement that strongly. I would say that great programmers always write clear code, and that often translates to less code. 
Sometimes the clearest, simplest, and most obvious design takes a little more code than a design that’s more "clever"--in those cases I think the great programmer will write more code to avoid an overly clever design solution. Regardless, the idea that productivity can be measured cleanly as "lines of code per staff month" is subject to problems either way. </P><P>The problem with measuring productivity in terms of lines of code per staff month is the old Dilbert joke about Wally coding himself a minivan. If you measure productivity in terms of volume of code generated, some people will optimize for that measure, i.e., they will find ways to write more lines of code, even if more lines of code aren’t needed. This isn’t really a problem with this specific way of measuring productivity. This really just speaks to the management chestnut that "what gets measured gets done," so you need to be careful what you measure. </P><P><STRONG>Productivity in Function Points</STRONG></P><P>Some of the problems of "lines of code per staff month" can be avoided by measuring program size in function points rather than lines of code. Function points are a "synthetic" measure of program size in which inputs, outputs, queries, and files are counted to determine program size. An inefficient design/coding style won’t generate more function points, so function points aren’t subject to the same issues as lines of code. 
They are however subject to more practical issues, namely that to get an accurate count of function points you need the services of a certified function point counter (which most organizations don’t have available), and the mapping between how function points are counted and individual work packages is rough enough that it becomes impractical to use them to ascertain the productivity of individual programmers.</P><P><STRONG>What about Complexity?</STRONG></P><P>Managers frequently mention this issue:  "I always give my best programmer the most difficult/most complex sections of code to work on. His productivity on any measured basis might very well be low compared to programmers who get easier assignments, but my other programmers would take twice as long." Yep. That’s a legitimate issue too. </P><P><STRONG>Is There Any Way to Measure Individual Productivity? </STRONG></P><P>Difficulties like these have led many people to conclude that measuring individual productivity is so fraught with problems that no one should even try. I think it is possible to measure individual productivity meaningfully, as long as  you keep several key factors in mind.</P><P>1. Don’t expect any single dimensional measure of productivity to give you a very good picture of individual productivity. Think about all the statistics that are collected in sports. We can’t even use a single measure to determine how good a hitter in baseball is. We consider batting average, home runs, runs batted in, on-base percentage, and other factors--and then we still argue about what the numbers mean. If we can’t measure the "good hitter" using a simple measure, why would we expect we could measure something as complex as individual productivity using a simple measure? What we need to do instead is use a combination of measures, which collectively will give us insights into individual productivities. 
(Measures could include on-time task completion percentage, manager evaluation on a scale of 1-10, peer evaluation on a scale of 1-10, lines of code per staff month, defects reported per line of code, defects fixed per line of code, bad fix injection rate, etc.)</P><P>2. Don’t expect any measures--whether single measures or a combination of measures--to support fine-grained discriminations in productivity among individuals. A good guideline is that measures of individual productivity give you questions to ask but they don’t give you the answers. Using measures of performance for, say, individual performance reviews is both bad management and bad statistics. </P><P>3. Remember that trends are usually more important than single-point measures. Measures of individual productivity tend to be far less useful in comparing one individual to another than they are in seeing how one individual is progressing over time. </P><P>4. Ask why you need to measure individual productivity at all. In a research setting, researchers need to measure productivity to assess the relative effectiveness of different techniques, and their use of these measures is subject to far fewer problems than measuring individual productivity on real projects is. In a real project environment, what do you want to use the measure(s) for? Performance reviews? Not a good idea for the reasons mentioned above. Task assignments? Most managers I talk with say they *know* who their star contributors are without measuring, and I believe them. Estimation? No, the variations caused by different design approaches, different task difficulty, and related factors make that an ineffective way to build up project estimates. </P><P>On real projects it’s hard to find a use for individual productivity measures that is both practical and statistically valid. 
In my experience, aside from research settings, the attempt to measure individual performance arises most often from a desire to do something with the measurements that isn’t statistically valid. So while I see the value of measuring individual performance in research settings, I think it’s difficult to find cases in which the effort is justified on real projects. </P><P>(Measuring team productivity and organizational productivity is a different matter -- I’ll blog about that soon). </P>]]></description>
  <dc:creator>stevemcc</dc:creator>
  <dc:date>2008-04-09T17:22:00Z</dc:date>
  <content:encoded><![CDATA[<p>My last couple of posts on <a href="https://www.construx.com/10x_Software_Development/Productivity_Variations_Among_Software_Developers_and_Teams__The_Origin_of_10x/">productivity variations among programmers</a> and the <a href="https://www.construx.com/10x_Software_Development/Chief_Programmer_Team_Update/">Chief Programmer Team model</a> gave rise to some discussion about hazards of measuring software productivity at the individual programmer level. Software engineering studies normally measure productivity in terms of time to complete a specific task, or sometimes in terms of lines of code per effort-hour, staff-month, or some other measure of effort. Regardless of how you choose to measure productivity, there will be issues.</p>
<span>Productivity in Lines of Code Per Staff Month</span><p>Software design is a non-deterministic activity, and researchers have found 10x variations in the code volume that different designer/developers will generate in response to a particular problem specification. If productivity is measured as lines of code per staff month (or equivalent), that implicitly suggests that the programmer who writes 10 times the amount of code to solve a particular problem is more productive than the programmer who writes 1 times the amount of code. That clearly is not right. Some commenters on my previous blog entry asserted that great programmers always write less code. My observation is that there’s a correlation there, but I wouldn’t make that statement that strongly. I would say that great programmers always write clear code, and that often translates to less code. Sometimes the clearest, simplest, and most obvious design takes a little more code than a design that’s more "clever"--in those cases I think the great programmer will write more code to avoid an overly clever design solution. Regardless, the idea that productivity can be measured cleanly as "lines of code per staff month" is subject to problems either way.</p>
<p>The problem with measuring productivity in terms of lines of code per staff month is the old Dilbert joke about Wally coding himself a minivan. If you measure productivity in terms of volume of code generated, some people will optimize for that measure, i.e., they will find ways to write more lines of code, even if more lines of code aren’t needed. This isn’t really a problem with this specific way of measuring productivity. This really just speaks to the management chestnut that "what gets measured gets done," so you need to be careful what you measure.</p>
<span>Productivity in Function Points</span><p>Some of the problems of "lines of code per staff month" can be avoided by measuring program size in function points rather than lines of code. Function points are a "synthetic" measure of program size in which inputs, outputs, queries, and files are counted to determine program size. An inefficient design/coding style won’t generate more function points, so function points aren’t subject to the same issues as lines of code. They are however subject to more practical issues, namely that to get an accurate count of function points you need the services of a certified function point counter (which most organizations don’t have available), and the mapping between how function points are counted and individual work packages is rough enough that it becomes impractical to use them to ascertain the productivity of individual programmers.</p>
<span>What about Complexity?</span><p>Managers frequently mention this issue: "I always give my best programmer the most difficult/most complex sections of code to work on. His productivity on any measured basis might very well be low compared to programmers who get easier assignments, but my other programmers would take twice as long." Yep. That’s a legitimate issue too.</p>
<span>Is There Any Way to Measure Individual Productivity? </span><p>Difficulties like these have led many people to conclude that measuring individual productivity is so fraught with problems that no one should even try. I think it is possible to measure individual productivity meaningfully, as long as you keep several key factors in mind.</p>
<p>1. Don’t expect any single dimensional measure of productivity to give you a very good picture of individual productivity. Think about all the statistics that are collected in sports. We can’t even use a single measure to determine how good a hitter in baseball is. We consider batting average, home runs, runs batted in, on-base percentage, and other factors--and then we still argue about what the numbers mean. If we can’t measure the "good hitter" using a simple measure, why would we expect we could measure something as complex as individual productivity using a simple measure? What we need to do instead is use a combination of measures, which collectively will give us insights into individual productivities. (Measures could include on-time task completion percentage, manager evaluation on a scale of 1-10, peer evaluation on a scale of 1-10, lines of code per staff month, defects reported per line of code, defects fixed per line of code, bad fix injection rate, etc.)</p>
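<p>To make point 1 concrete, here is a minimal sketch in Python of how several measures might be normalized and viewed side by side rather than collapsed into one number. Every measure name, value, and team range below is hypothetical, chosen purely for illustration:</p>

```python
# Sketch: viewing several productivity measures side by side.
# All measure names, values, and team ranges are hypothetical illustrations.

def normalize(value, lo, hi):
    """Scale a raw measure onto 0..1 relative to the range seen on the team."""
    return (value - lo) / (hi - lo) if hi > lo else 0.0

# (value, team minimum, team maximum) for one developer over one quarter
measures = {
    "on_time_completion_pct": (82.0, 50.0, 100.0),
    "manager_eval_1_to_10":   (7.0, 1.0, 10.0),
    "peer_eval_1_to_10":      (8.0, 1.0, 10.0),
    "defects_per_kloc":       (3.2, 1.0, 12.0),   # lower is better
}

profile = {}
for name, (value, lo, hi) in measures.items():
    score = normalize(value, lo, hi)
    if name == "defects_per_kloc":   # invert measures where lower is better
        score = 1.0 - score
    profile[name] = round(score, 2)

print(profile)
```

<p>A profile like this supports "questions to ask" (why does peer evaluation run high while on-time completion lags?) far better than any single composite number would.</p>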
<p>2. Don’t expect any measures--whether single measures or a combination of measures--to support fine-grained discriminations in productivity among individuals. A good guideline is that measures of individual productivity give you questions to ask but they don’t give you the answers. Using measures of performance for, say, individual performance reviews is both bad management and bad statistics.</p>
<p>3. Remember that trends are usually more important than single-point measures. Measures of individual productivity tend to be far less useful in comparing one individual to another than they are in seeing how one individual is progressing over time.</p>
<p>4. Ask why you need to measure individual productivity at all. In a research setting, researchers need to measure productivity to assess the relative effectiveness of different techniques, and their use of these measures is subject to far fewer problems than measuring individual productivity on real projects is. In a real project environment, what do you want to use the measure(s) for? Performance reviews? Not a good idea for the reasons mentioned above. Task assignments? Most managers I talk with say they *know* who their star contributors are without measuring, and I believe them. Estimation? No, the variations caused by different design approaches, different task difficulty, and related factors make that an ineffective way to build up project estimates.</p>
<p>On real projects it’s hard to find a use for individual productivity measures that is both practical and statistically valid. In my experience, aside from research settings, the attempt to measure individual performance arises most often from a desire to do something with the measurements that isn’t statistically valid. So while I see the value of measuring individual performance in research settings, I think it’s difficult to find cases in which the effort is justified on real projects.</p>
<p>(Measuring team productivity and organizational productivity is a different matter -- I'll blog about that soon).</p>]]></content:encoded>
 </item>
 <item rdf:about="/10x_Software_Development/Chief_Programmer_Team_Update/?blogid=23485">
  <title>Chief Programmer Team Update</title>
  <link>https://www.construx.com/10x_Software_Development/Chief_Programmer_Team_Update/?blogid=23485</link>
  <description><![CDATA[<P>One spinoff from the 10x difference in programmer productivity was the Chief Programmer Team structure. The idea of the chief-programmer team was originally developed at IBM during the late 1960s (Baker 1972, Baker and Mills 1973). It was popularized by Fred Brooks in the Mythical Man-Month (Brooks 1975, 1995), in which Brooks referred to it as a surgical team. The two terms are interchangeable. I described the technique in my 1996 book <EM>Rapid Development</EM>, but I think we’ve learned some important lessons about the CPT structure since then. </P><P><STRONG>Original Chief Programmer Team Project</STRONG></P><P>The original chief programmer team project was conducted in the late 1960s. IBM was commissioned to build an information retrieval system for the New York Times. The Chief Programmer on that project (the original Chief Programmer) was Harlan Mills, who created all the design and wrote all of the production code. He had eight other people arrayed around him in various support functions:</P><UL><LI>A "backup programmer" serves as the chief programmer’s alter ego. The backup programmer supports the chief programmer as critic, research assistant, technical contact for outside groups, and backup chief. </LI><LI>The "administrator" handles administrative matters such as money, people, space, and machines. The Chief has ultimate say about these matters, but the administrator frees the Chief from having to deal with them on a daily basis. </LI><LI>The "toolsmith" is responsible for creating custom tools requested by the Chief. In today’s terminology, the toolsmith would be in charge of maintaining the build environment, creating scripts, etc. </LI><LI>The team is rounded out by a "language lawyer" who supports the Chief by answering esoteric questions about the programming language the Chief is using. 
</LI></UL><P>Several of the support roles suggested in the original chief-programmer proposal are now regularly performed by nonprogrammers--by documentation specialists, test specialists, and program managers. Other tasks such as word processing and version control have been simplified so much by modern software tools that they no longer need to be performed by support personnel. And the internet has reduced the need for language lawyers--many questions can be answered via a quick search on the web. </P><P><STRONG>Attempts to Replicate IBM’s Chief Programmer Team Results: Is 10x Good Enough? </STRONG></P><P>On the original project, Harlan Mills personally wrote 83,000 lines of production code in one year. He wrote that code on a batch mode operating system. And on punch cards! Even when you divide the 83,000 lines of code by the nine people on the project, that’s 9,200 lines of code per staff year, which is still in the ballpark of acceptable productivity for similar kinds of projects 40 years later. With productivity like that under those circumstances you can see why the IBM project was heralded as one of the most successful projects of its time. </P><P>In the years since that project many organizations have tried to implement Chief Programmer teams, and few have been able to repeat IBM’s initial stunning success. The Achilles’ heel of the Chief Programmer Team model is that for it to make sense to organize staff the way they were organized on the IBM project, the Chief Programmer has to be more productive than <EM>everyone else on the team put together</EM>. On the original IBM project, Harlan Mills was a near-genius programmer who was an expert software methodologist, talented writer, exceptionally self-disciplined, and highly motivated. When he decided to roll up his sleeves and write code, he had few peers. Think "Kent Beck of His Day" and you’d be pretty close. 
He was one of the rare individuals truly capable of doing more work than the eight other people on his team put together. </P><P>In a <A href="http://forums.construx.com/blogs/stevemcc/archive/2008/03/27/productivity-variations-among-software-developers-and-teams-the-origin-of-quot-10x-quot.aspx">previous blog posting</A> I discussed the fact that numerous studies have found 10-fold variations in productivity between the best and worst programmers with similar levels of experience. For the CPT model to work, the Chief Programmer doesn’t have to be 10x as productive as the <EM>worst</EM> programmer. He has to do the work of eight or nine people put together, which means he has to be 10x as productive as the <EM>average </EM>programmer, not 10x as productive as the worst. That’s a very tall order. If you assume the best programmer is 10x as productive as the worst, then the best will be only something like 2-3 times as productive as the average programmer, and that isn’t good enough for the CPT model to pay off with a total project team of nine people.</P><P>Another factor is that, while numerous studies have found 10x differences among individuals, researchers have <EM>not </EM>found 10-fold differences among programmers <EM>working within the same organizations</EM>. Some research has found that good programmers tend to cluster within certain companies, average programmers tend to cluster within other companies, and so on (Mills 1983). So even if there’s a 10x difference industrywide, the difference you’d typically see within a given company is more like 3-5x from best to worst, which means the difference from <EM>best to average</EM> is more like 1.5x or 2x within any given company. </P><P><STRONG>Bottom line: </STRONG>The Chief Programmer Team organization can make sense in the rare case in which you have a near genius on your staff--one that is dramatically more productive than the average programmer on your staff. 
But from what I’ve seen there are far fewer near geniuses than there are near-genius wannabes, and that limits the applicability of the technique. </P><P><STRONG>Resources </STRONG></P><UL><LI>Construx’s <A href="http://www.construx.com/Page.aspx?nid=17&amp;id=99">10x Software Development</A> Seminar. My company’s answer to the question of "What does it take to move a team toward the 10x end of the scale?"</LI><LI>My earlier blog posting on <A href="http://forums.construx.com/blogs/stevemcc/archive/2008/03/27/productivity-variations-among-software-developers-and-teams-the-origin-of-quot-10x-quot.aspx">10x differences in productivity</A>.</LI><LI>My earlier blog posting on <A href="http://forums.construx.com/blogs/stevemcc/archive/2007/08/12/how-to-self-study-for-a-computer-programming-job.aspx">self-education</A>. </LI><LI>Construx’s web resources for <A href="http://www.construx.com/professionaldev">professional development</A>. </LI></UL><P><STRONG>References</STRONG></P><P>Brooks, Frederick P., Jr. <EM>The Mythical Man-Month</EM>, Reading, Massachusetts: Addison-Wesley, 1975.</P><P>Brooks, Frederick P., Jr. <EM>The Mythical Man-Month</EM>, <EM>Anniversary Edition</EM>, Reading, Massachusetts: Addison-Wesley, 1995.</P><P>Baker, F. Terry. "Chief Programmer Team Management of Production Programming," <EM>IBM Systems Journal</EM>, vol. 11, no. 1, 1972, pp. 56-73. </P><P>Baker, F. Terry and Harlan D. Mills. "Chief Programmer Teams." <EM>Datamation</EM>, Volume 19, Number 12 (December 1973), pp. 58-61.</P><P>McConnell, Steve. <EM>Rapid Development</EM>. Microsoft Press, 1996. </P><P>Mills, Harlan D. <EM>Software Productivity</EM>, Boston, Massachusetts: Little, Brown, 1983.</P>]]></description>
  <dc:creator>stevemcc</dc:creator>
  <dc:date>2008-03-31T17:28:00Z</dc:date>
  <content:encoded><![CDATA[<p>One spinoff from the 10x difference in programmer productivity was the Chief Programmer Team structure. The idea of the chief-programmer team was originally developed at IBM during the late 1960s (Baker 1972, Baker and Mills 1973). It was popularized by Fred Brooks in the Mythical Man-Month (Brooks 1975, 1995), in which Brooks referred to it as a surgical team. The two terms are interchangeable. I described the technique in my 1996 book <em>Rapid Development</em>, but I think we've learned some important lessons about the CPT structure since then.</p>
<h2><span>Original Chief Programmer Team Project</span></h2>
<p>The original chief programmer team project was conducted in the late 1960s. IBM was commissioned to build an information retrieval system for the New York Times. The Chief Programmer on that project (the original Chief Programmer) was Harlan Mills, who created all the design and wrote all of the production code. He had eight other people arrayed around him in various support functions:</p>
<ul>
<li>A "backup programmer" serves as the chief programmer’s alter ego. The backup programmer supports the chief programmer as critic, research assistant, technical contact for outside groups, and backup chief. </li>
<li>The "administrator" handles administrative matters such as money, people, space, and machines. The Chief has ultimate say about these matters, but the administrator frees the Chief from having to deal with them on a daily basis. </li>
<li>The "toolsmith" is responsible for creating custom tools requested by the Chief. In today’s terminology, the toolsmith would be in charge of maintaining the build environment, creating scripts, etc. </li>
<li>The team is rounded out by a "language lawyer" who supports the Chief by answering esoteric questions about the programming language the Chief is using. </li>
</ul>
<p>Several of the support roles suggested in the original chief-programmer proposal are now regularly performed by nonprogrammers--by documentation specialists, test specialists, and program managers. Other tasks such as word processing and version control have been simplified so much by modern software tools that they no longer need to be performed by support personnel. And the internet has reduced the need for language lawyers--many questions can be answered via a quick search on the web.</p>
<span>Attempts to Replicate IBM’s Chief Programmer Team Results: Is 10x Good Enough?</span><p>On the original project, Harlan Mills personally wrote 83,000 lines of production code in one year. He wrote that code on a batch mode operating system. And on punch cards! Even when you divide the 83,000 lines of code by the nine people on the project, that’s 9,200 lines of code per staff year, which is still in the ballpark of acceptable productivity for similar kinds of projects 40 years later. With productivity like that under those circumstances you can see why the IBM project was heralded as one of the most successful projects of its time.</p>
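<p>The arithmetic behind those figures is easy to verify. A quick sketch, using only the numbers from the paragraph above:</p>

```python
# Team-level productivity on the original Chief Programmer Team project.
total_loc = 83_000   # production code written in one year, all by Mills
team_size = 9        # Mills plus eight support staff

loc_per_staff_year = total_loc / team_size
print(round(loc_per_staff_year))   # about 9,222 -- the "9,200" figure cited above
```

<p>And since Mills wrote every line himself, his individual output was nine times the per-person average, which is exactly the condition under which the team structure paid off.</p>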
<p>In the years since that project many organizations have tried to implement Chief Programmer teams, and few have been able to repeat IBM’s initial stunning success. The Achilles' heel of the Chief Programmer Team model is that for it to make sense to organize staff the way they were organized on the IBM project, the Chief Programmer has to be more productive than <em>everyone else on the team put together</em>. On the original IBM project, Harlan Mills was a near-genius programmer who was an expert software methodologist, talented writer, exceptionally self-disciplined, and highly motivated. When he decided to roll up his sleeves and write code, he had few peers. Think "Kent Beck of His Day" and you’d be pretty close. He was one of the rare individuals truly capable of doing more work than the eight other people on his team put together.</p>
<p>In a <a href="https://www.construx.com/10x_Software_Development/Productivity_Variations_Among_Software_Developers_and_Teams__The_Origin_of_10x/">previous blog posting</a> I discussed the fact that numerous studies have found 10-fold variations in productivity between the best and worst programmers with similar levels of experience. For the CPT model to work, the Chief Programmer doesn’t have to be 10x as productive as the <em>worst</em> programmer. He has to do the work of eight or nine people put together, which means he has to be 10x as productive as the <em>average </em>programmer, not 10x as productive as the worst. That’s a very tall order. If you assume the best programmer is 10x as productive as the worst, then the best will be only something like 2-3 times as productive as the average programmer, and that isn’t good enough for the CPT model to pay off with a total project team of nine people.</p>
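<p>The claim that the best will be only two to three times as productive as the average can be checked with back-of-the-envelope arithmetic. The nine relative productivities below are invented purely for illustration; only the 10:1 spread between worst and best matters:</p>

```python
# Hypothetical nine-person team whose relative productivities span 10:1.
team = [1, 2, 3, 4, 4, 5, 6, 8, 10]

best = max(team)
average = sum(team) / len(team)
rest = sum(team) - best   # combined output of everyone except the best

print(round(best / average, 1))   # about 2.1x the average
print(best >= rest)               # False: far short of out-producing the rest combined
```

<p>However you redistribute the hypothetical numbers within a 10:1 spread, the best programmer lands only a few multiples above the average, nowhere near the roughly 9x over average that the CPT structure requires.</p>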
<p>Another factor is that, while numerous studies have found 10x differences among individuals, researchers have <em>not </em>found 10-fold differences among programmers <em>working within the same organizations</em>. Some research has found that good programmers tend to cluster within certain companies, average programmers tend to cluster within other companies, and so on (Mills 1983). So even if there’s a 10x difference industrywide, the difference you’d typically see within a given company is more like 3-5x from best to worst, which means the difference from <em>best to average</em> is more like 1.5x or 2x within any given company.</p>
<p><strong>Bottom line: </strong>The Chief Programmer Team organization can make sense in the rare case in which you have a near genius on your staff--one that is dramatically more productive than the average programmer on your staff. But from what I've seen there are far fewer near geniuses than there are near-genius wannabes, and that limits the applicability of the technique.</p>
<h2><span>Resources </span></h2>
<ul>
<li>Construx's <a title="10x Software Development" href="http://www.construx.com/10x_Software_Engineering/">10x Software Development</a> Seminar. My company's answer to the question of "What does it take to move a team toward the 10x end of the scale?"</li>
<li>My earlier blog posting on <a href="https://www.construx.com/10x_Software_Development/Productivity_Variations_Among_Software_Developers_and_Teams__The_Origin_of_10x/">10x differences in productivity</a>.</li>
<li>My earlier blog posting on <a href="https://www.construx.com/10x_Software_Development/How_to_Self-Study_for_a_Computer_Programming_Job/">self-education</a>. </li>
<li>Construx's web resources for <a href="https://www.construx.com/Resources/Professional_Development/">professional development</a>. </li>
</ul>
<h2><span>References</span></h2>
<p>Brooks, Frederick P., Jr. <em>The Mythical Man-Month</em>, Reading, Massachusetts: Addison-Wesley, 1975.</p>
<p>Brooks, Frederick P., Jr. <em>The Mythical Man-Month</em>, <em>Anniversary Edition</em>, Reading, Massachusetts: Addison-Wesley, 1995.</p>
<p>Baker, F. Terry. "Chief Programmer Team Management of Production Programming," <em>IBM Systems Journal</em>, vol. 11, no. 1, 1972, pp. 56-73.</p>
<p>Baker, F. Terry and Harlan D. Mills. "Chief Programmer Teams." <em>Datamation</em>, Volume 19, Number 12 (December 1973), pp. 58-61.</p>
<p>McConnell, Steve. <em>Rapid Development</em>. Microsoft Press, 1996.</p>
<p>Mills, Harlan D. <em>Software Productivity</em>, Boston, Massachusetts: Little, Brown, 1983.</p>]]></content:encoded>
 </item>
 <item rdf:about="/blogs/stevemcc/archive/2008/03/27/productivity-variations-among-software-developers-and-teams-the-origin-of-quot-10x-quot.aspx?blogid=23485">
  <title>Productivity Variations Among Software Developers and Teams: The Origin of 10x</title>
  <link>https://www.construx.com/blogs/stevemcc/archive/2008/03/27/productivity-variations-among-software-developers-and-teams-the-origin-of-quot-10x-quot.aspx?blogid=23485</link>
  <description><![CDATA[<p>Some blog readers have asked for more background on where the "10x" name of this blog came from. The gist of the name is that researchers have found 10-fold differences in productivity and quality between different programmers with the same levels of experience and also between different teams working within the same industries.</p>
<span>Individual Productivity Variation in Software Development</span><p>The original study that found huge variations in individual programming productivity was conducted in the late 1960s by Sackman, Erikson, and Grant (1968). They studied professional programmers with an average of 7 years’ experience and found that the ratio of initial coding time between the best and worst programmers was about 20 to 1; the ratio of debugging times over 25 to 1; of program size 5 to 1; and of program execution speed about 10 to 1. They found no relationship between a programmer’s amount of experience and code quality or productivity.</p>
<p>Detailed examination of Sackman, Erikson, and Grant's findings shows some flaws in their methodology (including combining results from programmers working in low-level programming languages with those working in high-level programming languages). However, even after accounting for the flaws, their data still shows more than a 10-fold difference between the best programmers and the worst.</p>
<p>In years since the original study, the general finding that "There are order-of-magnitude differences among programmers" has been confirmed by many other studies of professional programmers (Curtis 1981, Mills 1983, DeMarco and Lister 1985, Curtis et al. 1986, Card 1987, Boehm and Papaccio 1988, Valett and McGarry 1989, Boehm et al 2000).</p>
<p>There is also lots of anecdotal support for the large variation between programmers. During the time I was at Boeing in the mid-1980s, there was a project with about 80 programmers that was at risk of missing a critical deadline. The project was critical to Boeing, and so they moved most of the 80 people off that project and brought in <i>one guy </i>who finished all the coding and delivered the software on time. I didn't work on that project, and I didn't know the guy, so I'm not 100% sure the story is even true. But I heard the story from someone I trusted, and it seemed true at the time.</p>
<p>This degree of variation isn't unique to software. A study by Norm Augustine found that in a variety of professions--writing, football, invention, police work, and other occupations--the top 20 percent of the people produced about 50 percent of the output, whether the output is touchdowns, patents, solved cases, or software (Augustine 1979). When you think about it, this just makes sense. We've all known people who are exceptional students, exceptional athletes, exceptional artists, exceptional parents--these differences are just part of the human experience; why would we expect software development to be any different?</p>
<span>Extremes in Individual Variation on the Bad Side</span><p>Augustine's study observed that, since some people make no tangible contribution whatsoever (quarterbacks who make no touchdowns, inventors who own no patents, detectives who don’t close cases, and so on), the data probably understates the actual variation in productivity.</p>
<p>This appears to be true in software. In several of the published studies on software productivity, about 10% of the subjects in the experiments weren't able to complete the experimental assignment. In the studies, the write-ups say, "Therefore those experimental subjects' results were excluded from our data set." But in real life if someone "doesn't complete the assignment" you can't just "exclude their results from the data set." You have to wait for them to finish, assign someone else to do their work, and so on. The interesting (and frightening) implication of this is that something like 10% of the people working in the software field might actually be contributing <i>negative</i> productivity to their projects. Again, this lines up well with real-world experience. I think many of us can think of specific people we've worked with who fit that description.</p>
<span>Team Productivity Variation in Software Development</span><p>Software experts have long observed that team productivity varies about as much as individual productivity does--by an order of magnitude (Mills 1983). Part of the reason is that good programmers tend to cluster in some organizations, and bad programmers tend to cluster in other organizations, an observation that has been confirmed by a study of 166 professional programmers from 18 organizations (DeMarco and Lister 1999).</p>
<p>In one study of seven identical projects, the efforts expended varied by a factor of 3.4 to 1 and program sizes by a factor of 3 to 1 (Boehm, Gray, and Seewaldt 1984). In spite of the productivity range, the programmers in this study were not a diverse group. They were all professional programmers with several years of experience who were enrolled in a computer-science graduate program. It’s reasonable to assume that a study of a less homogeneous group would turn up even greater differences.<br /><br />An earlier study of programming teams observed a 5-to-1 difference in program size and a 2.6-to-1 variation in the time required for a team to complete the same project (Weinberg and Schulman 1974).</p>
<p>After reviewing more than 20 years of data in constructing the Cocomo II estimation model, Barry Boehm and other researchers concluded that developing a program with a team in the 15th percentile of programmers ranked by ability typically requires about 3.5 times as many staff-months as developing a program with a team in the 90th percentile (Boehm et al 2000). The difference will be much greater if one team is more experienced than the other in the programming language or in the application area or in both.</p>
<p>One specific data point is the difference in productivity between Lotus 1-2-3 version 3 and Microsoft Excel 3.0. Both were desktop spreadsheet applications completed in the 1989-1990 timeframe. Finding cases in which two companies publish data on such similar projects is rare, which makes this head-to-head comparison especially interesting. The results of these two projects were as follows: Excel took 50 staff years to produce 649,000 lines of code. Lotus 1-2-3 took 260 staff years to produce 400,000 lines of code. Excel's team produced about 13,000 lines of code per staff year. Lotus's team produced about 1,500 lines of code per staff year. The difference in productivity between the two teams was more than a factor of 8, which supports the general claim of order-of-magnitude differences not just between different individuals but also between different project teams.</p>
<span>What Have You Seen? </span><p>Have you seen 10:1 differences in capabilities between different individuals? Between different teams? How much better was the best programmer you've worked with than the worst? Does 10:1 even cover the range?</p>
<p>I look forward to hearing your thoughts.</p>
<span>References</span><p>Augustine, N. R. 1979. "Augustine’s Laws and Major System Development Programs." Defense Systems Management Review: 50-76.</p>
<p>Boehm, Barry W., and Philip N. Papaccio. 1988. "Understanding and Controlling Software Costs." IEEE Transactions on Software Engineering SE-14, no. 10 (October): 1462-77.</p>
<p>Boehm, Barry, et al. 2000. Software Cost Estimation with Cocomo II. Boston, Mass.: Addison-Wesley.</p>
<p>Boehm, Barry W., T. E. Gray, and T. Seewaldt. 1984. "Prototyping Versus Specifying: A Multiproject Experiment." IEEE Transactions on Software Engineering SE-10, no. 3 (May): 290-303. Also in Jones 1986b.</p>
<p>Card, David N. 1987. "A Software Technology Evaluation Program." Information and Software Technology 29, no. 6 (July/August): 291-300.</p>
<p>Curtis, Bill. 1981. "Substantiating Programmer Variability." Proceedings of the IEEE 69, no. 7: 846.</p>
<p>Curtis, Bill, et al. 1986. "Software Psychology: The Need for an Interdisciplinary Program." Proceedings of the IEEE 74, no. 8: 1092-1106.</p>
<p>DeMarco, Tom, and Timothy Lister. 1985. "Programmer Performance and the Effects of the Workplace." Proceedings of the 8th International Conference on Software Engineering. Washington, D.C.: IEEE Computer Society Press, 268-72.</p>
<p>DeMarco, Tom, and Timothy Lister. 1999. Peopleware: Productive Projects and Teams, 2d Ed. New York: Dorset House.</p>
<p>Mills, Harlan D. 1983. Software Productivity. Boston, Mass.: Little, Brown.</p>
<p>Sackman, H., W.J. Erikson, and E. E. Grant. 1968. "Exploratory Experimental Studies Comparing Online and Offline Programming Performance." Communications of the ACM 11, no. 1 (January): 3-11.</p>
<p>Valett, J., and F. E. McGarry. 1989. "A Summary of Software Measurement Experiences in the Software Engineering Laboratory." Journal of Systems and Software 9, no. 2 (February): 137-48.</p>
<p>Weinberg, Gerald M., and Edward L. Schulman. 1974. "Goals and Performance in Computer Programming." Human Factors 16, no. 1 (February): 70-77.</p>]]></description>
  <dc:creator>stevemcc</dc:creator>
  <dc:date>2008-03-27T10:15:00Z</dc:date>
  <content:encoded><![CDATA[<p>Some blog readers have asked for more background on where the "10x" name of this blog came from. The gist of the name is that researchers have found 10-fold differences in productivity and quality between different programmers with the same levels of experience and also between different teams working within the same industries.</p>
<span>Individual Productivity Variation in Software Development</span><p>The original study that found huge variations in individual programming productivity was conducted in the late 1960s by Sackman, Erikson, and Grant (1968). They studied professional programmers with an average of 7 years’ experience and found that the ratio of initial coding time between the best and worst programmers was about 20 to 1; the ratio of debugging times over 25 to 1; of program size 5 to 1; and of program execution speed about 10 to 1. They found no relationship between a programmer’s amount of experience and code quality or productivity.</p>
<p>Detailed examination of Sackman, Erikson, and Grant's findings shows some flaws in their methodology (including combining results from programmers working in low-level programming languages with those working in high-level programming languages). However, even after accounting for the flaws, their data still shows more than a 10-fold difference between the best programmers and the worst.</p>
<p>In the years since the original study, the general finding that "there are order-of-magnitude differences among programmers" has been confirmed by many other studies of professional programmers (Curtis 1981, Mills 1983, DeMarco and Lister 1985, Curtis et al. 1986, Card 1987, Boehm and Papaccio 1988, Valett and McGarry 1989, Boehm et al 2000).</p>
<p>There is also lots of anecdotal support for the large variation between programmers. During the time I was at Boeing in the mid-1980s, there was a project with about 80 programmers that was at risk of missing a critical deadline. The project was critical to Boeing, and so they moved most of the 80 people off that project and brought in <i>one guy</i> who finished all the coding and delivered the software on time. I didn't work on that project, and I didn't know the guy, so I'm not 100% sure the story is even true. But I heard the story from someone I trusted, and it seemed true at the time.</p>
<p>This degree of variation isn't unique to software. A study by Norm Augustine found that in a variety of professions--writing, football, invention, police work, and other occupations--the top 20 percent of the people produced about 50 percent of the output, whether the output is touchdowns, patents, solved cases, or software (Augustine 1979). When you think about it, this just makes sense. We've all known people who are exceptional students, exceptional athletes, exceptional artists, exceptional parents--these differences are just part of the human experience; why would we expect software development to be any different?</p>
<span>Extremes in Individual Variation on the Bad Side</span><p>Augustine's study observed that, since some people make no tangible contribution whatsoever (quarterbacks who make no touchdowns, inventors who own no patents, detectives who don’t close cases, and so on), the data probably understates the actual variation in productivity.</p>
<p>This appears to be true in software. In several of the published studies on software productivity, about 10% of the subjects in the experiments weren't able to complete the experimental assignment. In the studies, the write-ups say, "Therefore those experimental subjects' results were excluded from our data set." But in real life, if someone "doesn't complete the assignment," you can't just "exclude their results from the data set." You have to wait for them to finish, assign someone else to do their work, and so on. The interesting (and frightening) implication of this is that something like 10% of the people working in the software field might actually be contributing <i>negative</i> productivity to their projects. Again, this lines up well with real-world experience. I think many of us can think of specific people we've worked with who fit that description.</p>
<span>Team Productivity Variation in Software Development</span><p>Software experts have long observed that team productivity varies about as much as individual productivity does--by an order of magnitude (Mills 1983). Part of the reason is that good programmers tend to cluster in some organizations, and bad programmers tend to cluster in other organizations, an observation that has been confirmed by a study of 166 professional programmers from 18 organizations (DeMarco and Lister 1999).</p>
<p>In one study of seven identical projects, the efforts expended varied by a factor of 3.4 to 1 and program sizes by a factor of 3 to 1 (Boehm, Gray, and Seewaldt 1984). In spite of the productivity range, the programmers in this study were not a diverse group. They were all professional programmers with several years of experience who were enrolled in a computer-science graduate program. It’s reasonable to assume that a study of a less homogeneous group would turn up even greater differences.<br /><br />An earlier study of programming teams observed a 5-to-1 difference in program size and a 2.6-to-1 variation in the time required for a team to complete the same project (Weinberg and Schulman 1974).</p>
<p>After reviewing more than 20 years of data in constructing the Cocomo II estimation model, Barry Boehm and other researchers concluded that developing a program with a team in the 15th percentile of programmers ranked by ability typically requires about 3.5 times as many staff-months as developing a program with a team in the 90th percentile (Boehm et al 2000). The difference will be much greater if one team is more experienced than the other in the programming language or in the application area or in both.</p>
<p>One specific data point is the difference in productivity between Lotus 1-2-3 version 3 and Microsoft Excel 3.0. Both were desktop spreadsheet applications completed in the 1989-1990 timeframe. Finding cases in which two companies publish data on such similar projects is rare, which makes this head-to-head comparison especially interesting. The results of these two projects were as follows: Excel took 50 staff years to produce 649,000 lines of code. Lotus 1-2-3 took 260 staff years to produce 400,000 lines of code. Excel's team produced about 13,000 lines of code per staff year. Lotus's team produced about 1,500 lines of code per staff year. The difference in productivity between the two teams was more than a factor of 8, which supports the general claim of order-of-magnitude differences not just between different individuals but also between different project teams.</p>
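<p>The arithmetic behind these figures is easy to verify. A minimal sketch (the only inputs are the staff-year and line-count figures quoted above; the rounding is mine):</p>

```python
# Rough check of the Excel vs. Lotus 1-2-3 productivity figures cited above.
excel_loc, excel_staff_years = 649_000, 50
lotus_loc, lotus_staff_years = 400_000, 260

excel_rate = excel_loc / excel_staff_years   # about 13,000 LOC per staff year
lotus_rate = lotus_loc / lotus_staff_years   # about 1,500 LOC per staff year
ratio = excel_rate / lotus_rate              # more than a factor of 8
```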
<span>What Have You Seen? </span><p>Have you seen 10:1 differences in capabilities between different individuals? Between different teams? How much better was the best programmer you've worked with than the worst? Does 10:1 even cover the range?</p>
<p>I look forward to hearing your thoughts.</p>
<span>References</span><p>Augustine, N. R. 1979. "Augustine’s Laws and Major System Development Programs." Defense Systems Management Review: 50-76.</p>
<p>Boehm, Barry W., and Philip N. Papaccio. 1988. "Understanding and Controlling Software Costs." IEEE Transactions on Software Engineering SE-14, no. 10 (October): 1462-77.</p>
<p>Boehm, Barry, et al. 2000. Software Cost Estimation with Cocomo II. Boston, Mass.: Addison-Wesley.</p>
<p>Boehm, Barry W., T. E. Gray, and T. Seewaldt. 1984. "Prototyping Versus Specifying: A Multiproject Experiment." IEEE Transactions on Software Engineering SE-10, no. 3 (May): 290-303. Also in Jones 1986b.</p>
<p>Card, David N. 1987. "A Software Technology Evaluation Program." Information and Software Technology 29, no. 6 (July/August): 291-300.</p>
<p>Curtis, Bill. 1981. "Substantiating Programmer Variability." Proceedings of the IEEE 69, no. 7: 846.</p>
<p>Curtis, Bill, et al. 1986. "Software Psychology: The Need for an Interdisciplinary Program." Proceedings of the IEEE 74, no. 8: 1092-1106.</p>
<p>DeMarco, Tom, and Timothy Lister. 1985. "Programmer Performance and the Effects of the Workplace." Proceedings of the 8th International Conference on Software Engineering. Washington, D.C.: IEEE Computer Society Press, 268-72.</p>
<p>DeMarco, Tom, and Timothy Lister. 1999. Peopleware: Productive Projects and Teams, 2d Ed. New York: Dorset House.</p>
<p>Mills, Harlan D. 1983. Software Productivity. Boston, Mass.: Little, Brown.</p>
<p>Sackman, H., W.J. Erikson, and E. E. Grant. 1968. "Exploratory Experimental Studies Comparing Online and Offline Programming Performance." Communications of the ACM 11, no. 1 (January): 3-11.</p>
<p>Valett, J., and F. E. McGarry. 1989. "A Summary of Software Measurement Experiences in the Software Engineering Laboratory." Journal of Systems and Software 9, no. 2 (February): 137-48.</p>
<p>Weinberg, Gerald M., and Edward L. Schulman. 1974. "Goals and Performance in Computer Programming." Human Factors 16, no. 1 (February): 70-77.</p>]]></content:encoded>
 </item>
 <item rdf:about="/10x_Software_Development/How_to_Scale_Up_Quickly/?blogid=23485">
  <title>How to Scale Up Quickly</title>
  <link>https://www.construx.com/10x_Software_Development/How_to_Scale_Up_Quickly/?blogid=23485</link>
  <description><![CDATA[<p>The question of how to scale up quickly in a software startup company is a perennially tough issue. There are some good ways to get started -- starting with a core of really senior people is one time-honored approach. Starting with a core team of people who have worked together at another employer is another approach that often works. The question, though, is how do you scale up <em>beyond </em>that core, and how do you scale up <em>quickly</em>?</p>
<p>I think it's an especially tough issue for people who are process oriented.&#160;If an organization is already pretty good-sized, then having well-defined and efficient processes can support scaling up quickly. Telcordia added something like 1,000 people to its technical staff in the year it was first assessed at CMM Level 5. But that's a very large organization to start with. If you're in startup mode, I think it's hard to add staff quickly without your organization's software practices reverting to the industry mean for your geographic area -- the new staff just has too strong a dilution effect on the existing staff for it to work any other way.</p>
<p>Trying to start up quickly by outsourcing is a dead end as far as I'm concerned, especially to India, where turnover is so high. Offshore captives can work, but the minimum workable size seems to be about 100 people, and it probably takes 2-3 years of ramp-up to reach financial break-even. It's hard to find a time-to-market gain in this approach.</p>
<p>So I think the only strategy that has much chance of working is being very, very selective about hiring, co-locating everyone at one facility, and making sure everyone has lots and lots of opportunity to interact both formally and informally, i.e., in addition to meetings you sponsor lots of morale events -- Friday afternoon beer busts, pizza &amp; movie nights at work, trips to football games, dinner at the boss's house, etc. You won't be able to guarantee that the new people will work in ways that are consistent with how the existing people are working, but at least they'll work in ways that are intelligent and they'll be cooperating well. After things slow down a little you can go back in and try to establish more work conventions. You hope!</p>
<p><em>What do you think? In your experience, what are the best ways for a startup to scale up quickly?</em></p>]]></description>
  <dc:creator>stevemcc</dc:creator>
  <dc:date>2008-03-18T08:22:00Z</dc:date>
  <content:encoded><![CDATA[<p>The question of how to scale up quickly in a software startup company is a perennially tough issue. There are some good ways to get started -- starting with a core of really senior people is one time-honored approach. Starting with a core team of people who have worked together at another employer is another approach that often works. The question, though, is how do you scale up <em>beyond </em>that core, and how do you scale up <em>quickly</em>?</p>
<p>I think it's an especially tough issue for people who are process oriented.&#160;If an organization is already pretty good-sized, then having well-defined and efficient processes can support scaling up quickly. Telcordia added something like 1,000 people to its technical staff in the year it was first assessed at CMM Level 5. But that's a very large organization to start with. If you're in startup mode, I think it's hard to add staff quickly without your organization's software practices reverting to the industry mean for your geographic area -- the new staff just has too strong a dilution effect on the existing staff for it to work any other way.</p>
<p>Trying to start up quickly by outsourcing is a dead end as far as I'm concerned, especially to India, where turnover is so high. Offshore captives can work, but the minimum workable size seems to be about 100 people, and it probably takes 2-3 years of ramp-up to reach financial break-even. It's hard to find a time-to-market gain in this approach.</p>
<p>So I think the only strategy that has much chance of working is being very, very selective about hiring, co-locating everyone at one facility, and making sure everyone has lots and lots of opportunity to interact both formally and informally, i.e., in addition to meetings you sponsor lots of morale events -- Friday afternoon beer busts, pizza &amp; movie nights at work, trips to football games, dinner at the boss's house, etc. You won't be able to guarantee that the new people will work in ways that are consistent with how the existing people are working, but at least they'll work in ways that are intelligent and they'll be cooperating well. After things slow down a little you can go back in and try to establish more work conventions. You hope!</p>
<p><em>What do you think? In your experience, what are the best ways for a startup to scale up quickly?</em></p>]]></content:encoded>
 </item>
 <item rdf:about="/10x_Software_Development/Software_Development_Seminars_in_New_York_City/?blogid=23485">
  <title>Software Development Seminars in New York City</title>
  <link>https://www.construx.com/10x_Software_Development/Software_Development_Seminars_in_New_York_City/?blogid=23485</link>
  <description><![CDATA[<p>I'll be in New York City next week teaching "Software Estimation in Depth." This is an enjoyable class to teach. It has great lab exercises, and it's fun to see the lightbulbs going off in people's heads as they "get" the key concepts in software estimation. You can read more about the class here: <a href="/Seminars/?dm=0">http://www.construx.com//Seminars/?dm=0</a>.</p>
<p>My company's also teaching several other classes in New York City next week (the only time in 2008 we'll be doing open-enrollment seminars on the east coast). Other classes are:</p>
<ul>
<li>How to be Agile without Being Extreme</li>
<li>10x Software Engineering</li>
<li>Requirements Boot Camp</li>
<li>Software Project Management Boot Camp</li>
<li>Design Boot Camp</li>
</ul>
<p>You can read more about all these classes at <a href="/Thought_Leadership/Events/Practical_benefits_profound_results/">http://www.construx.com//Thought_Leadership/Events/Practical_benefits_profound_results/</a>.</p>
<p>Here are more detailed summaries of these classes.</p>
<p><b class="header_sub2">Software Estimation in Depth</b>, <b>March 24-25, 2008</b>, Steve McConnell<b>,</b> Instructor<b> </b><a href="/Seminars/?dm=0">Details</a></p>
<p>This course focuses on providing many useful rules of thumb and procedures for creating software estimates ("the art of estimation") and a brief introduction to mathematical approaches to creating software project estimates ("the science of estimation"). This course provides techniques for making sure estimation is treated as an analytical rather than a political process. It explains how to negotiate effectively with other project stakeholders (such as marketing, management and your clients) so that everyone wins. The course features extensive lab work to give you hands-on experience creating many different kinds of software estimates. This seminar will be taught by Steve McConnell, author of <i>Code Complete</i>, <i>Rapid Development</i>, and <i>Software Estimation: Demystifying the Black Art</i>.<a href="/Seminars/?dm=0">More &gt;</a></p>
<p><b>How to be Agile Without Being Extreme</b>, <b>March 24-25, 2008</b>, Jerry Deville, Instructor<b> </b><a href="/Seminars/?dm=0">Details</a></p>
<p>Agile software development promises low overhead, high flexibility, and satisfied customers, but how do you separate the hype from the reality? Leading organizations have benefited from Agile development practices for many years. Learn how to select and deploy today’s most powerful Agile practices. Apply the essentials of Scrum, Extreme Programming, Crystal, Lean, and other Agile methods. This intensive seminar presents modern practices combined with decades of time-tested, low-risk methods–all with a track record of proven results. This seminar is based on Construx’s experience working with companies that have successfully deployed Agile practices--and our experiences with companies whose agile projects have failed. Extensive case studies and hands-on exercises will show you how to select and apply the particular Agile development techniques that are best for your specific projects.<a href="/Seminars/?dm=0">More &gt;</a></p>
<p><b>10x Software Engineering</b>, <b>March 26-28, 2008</b>, Matt Peloquin, Instructor<b> </b><a href="/Seminars/?dm=0">Details</a></p>
<p>Decades of research have found at least a ten-fold “10x” difference in productivity and quality between the best developers and the worst–and between the best teams and the worst. Discover the 5 Key Principles of 10x Engineering and avoid the productivity traps of “minus-x” engineering. Practice critical techniques that will turn your team into a high performing, 10x Team.<a href="/Seminars/?dm=0"> More &gt;</a></p>
<p><b>Requirements Boot Camp</b>, <b>March 26-28, 2008</b>, Earl Beede, Instructor<b> </b><a href="/Seminars/?dm=0">Details</a></p>
<p>What is the most frequently reported cause of software project failure–regardless of project size or type of software? <i>Requirements challenges.</i> Discover how leading-edge companies use requirements engineering to support successful software projects. Learn the three purposes of requirements and how to distinguish between requirements fantasies and requirements reality. Apply practical techniques for exploring user needs, capturing requirements, controlling changes, and building highly satisfactory software.<a href="/Seminars/?dm=0">More &gt;</a></p>
<p><b>Software Project Management Boot Camp</b>, <b>March 26-28, 2008</b>, Jerry Deville, Instructor<b> </b><a href="/Seminars/?dm=0">Details</a></p>
<p>Leading any project can be a challenge. Leading a software project can be even more challenging if you're new to project management or new to software. This seminar will help you make the transition to solid software project leadership. Software Project Management Boot Camp teaches you the concepts and techniques necessary to manage projects successfully. This seminar closely follows the Project Management Institute's (PMI) Project Management Body of Knowledge (PMBOK) and shows how to apply these best practices to a typical small to medium-sized software project. <a href="/Seminars/?dm=0">More &gt;</a></p>
<p><b>Design Boot Camp</b>, <b>March 26-28, 2008</b>, Steve Tockey, Instructor <a href="/Seminars/?dm=0">Details</a></p>
<p>Different designers will create designs that differ by at least a factor of 10 in the code volume produced. How do you invent simple, straightforward designs and avoid complex, error-prone designs? Understand the fundamental design principles that lead to high-quality designs requiring low implementation effort. Learn both Agile and traditional approaches to create great designs quickly and economically. <a href="/Seminars/?dm=0">More &gt;</a></p>]]></description>
  <dc:creator>johnc</dc:creator>
  <dc:date>2008-03-18T08:15:00Z</dc:date>
  <content:encoded><![CDATA[<p>I'll be in New York City next week teaching "Software Estimation in Depth." This is an enjoyable class to teach. It has great lab exercises, and it's fun to see the lightbulbs going off in people's heads as they "get" the key concepts in software estimation. You can read more about the class here: <a href="https://www.construx.com/Seminars/?dm=0">http://www.construx.com//Seminars/?dm=0</a>.</p>
<p>My company's also teaching several other classes in New York City next week (the only time in 2008 we'll be doing open-enrollment seminars on the east coast). Other classes are:</p>
<ul>
<li>How to be Agile without Being Extreme</li>
<li>10x Software Engineering</li>
<li>Requirements Boot Camp</li>
<li>Software Project Management Boot Camp</li>
<li>Design Boot Camp</li>
</ul>
<p>You can read more about all these classes at <a href="https://www.construx.com/Thought_Leadership/Events/Practical_benefits_profound_results/">http://www.construx.com//Thought_Leadership/Events/Practical_benefits_profound_results/</a>.</p>
<p>Here are more detailed summaries of these classes.</p>
<p><b class="header_sub2">Software Estimation in Depth</b>, <b>March 24-25, 2008</b>, Steve McConnell<b>,</b> Instructor<b> </b><a href="https://www.construx.com/Seminars/?dm=0">Details</a></p>
<p>This course focuses on providing many useful rules of thumb and procedures for creating software estimates ("the art of estimation") and a brief introduction to mathematical approaches to creating software project estimates ("the science of estimation"). This course provides techniques for making sure estimation is treated as an analytical rather than a political process. It explains how to negotiate effectively with other project stakeholders (such as marketing, management and your clients) so that everyone wins. The course features extensive lab work to give you hands-on experience creating many different kinds of software estimates. This seminar will be taught by Steve McConnell, author of <i>Code Complete</i>, <i>Rapid Development</i>, and <i>Software Estimation: Demystifying the Black Art</i>.<a href="https://www.construx.com/Seminars/?dm=0">More &gt;</a></p>
<p><b>How to be Agile Without Being Extreme</b>, <b>March 24-25, 2008</b>, Jerry Deville, Instructor<b> </b><a href="https://www.construx.com/Seminars/?dm=0">Details</a></p>
<p>Agile software development promises low overhead, high flexibility, and satisfied customers, but how do you separate the hype from the reality? Leading organizations have benefited from Agile development practices for many years. Learn how to select and deploy today’s most powerful Agile practices. Apply the essentials of Scrum, Extreme Programming, Crystal, Lean, and other Agile methods. This intensive seminar presents modern practices combined with decades of time-tested, low-risk methods–all with a track record of proven results. This seminar is based on Construx’s experience working with companies that have successfully deployed Agile practices--and our experiences with companies whose agile projects have failed. Extensive case studies and hands-on exercises will show you how to select and apply the particular Agile development techniques that are best for your specific projects.<a href="https://www.construx.com/Seminars/?dm=0">More &gt;</a></p>
<p><b>10x Software Engineering</b>, <b>March 26-28, 2008</b>, Matt Peloquin, Instructor<b> </b><a href="https://www.construx.com/Seminars/?dm=0">Details</a></p>
<p>Decades of research have found at least a ten-fold “10x” difference in productivity and quality between the best developers and the worst–and between the best teams and the worst. Discover the 5 Key Principles of 10x Engineering and avoid the productivity traps of “minus-x” engineering. Practice critical techniques that will turn your team into a high performing, 10x Team.<a href="https://www.construx.com/Seminars/?dm=0"> More &gt;</a></p>
<p><b>Requirements Boot Camp</b>, <b>March 26-28, 2008</b>, Earl Beede, Instructor<b> </b><a href="https://www.construx.com/Seminars/?dm=0">Details</a></p>
<p>What is the most frequently reported cause of software project failure–regardless of project size or type of software? <i>Requirements challenges.</i> Discover how leading-edge companies use requirements engineering to support successful software projects. Learn the three purposes of requirements and how to distinguish between requirements fantasies and requirements reality. Apply practical techniques for exploring user needs, capturing requirements, controlling changes, and building highly satisfactory software.<a href="https://www.construx.com/Seminars/?dm=0">More &gt;</a></p>
<p><b>Software Project Management Boot Camp</b>, <b>March 26-28, 2008</b>, Jerry Deville, Instructor<b> </b><a href="https://www.construx.com/Seminars/?dm=0">Details</a></p>
<p>Leading any project can be a challenge. Leading a software project can be even more challenging if you're new to project management or new to software. This seminar will help you make the transition to solid software project leadership. Software Project Management Boot Camp teaches you the concepts and techniques necessary to manage projects successfully. This seminar closely follows the Project Management Institute (PMI)<sup>®</sup> <i>A Guide to the Project Management Body of Knowledge (PMBOK<sup>®</sup> Guide)</i> and shows how to apply these best practices to a typical small to medium sized software project. <a href="https://www.construx.com/Seminars/?dm=0">More &gt;</a></p>
<p><b>Design Boot Camp</b>, <b>March 26-28, 2008</b>, Steve Tockey, Instructor <a href="https://www.construx.com/Seminars/?dm=0">Details</a></p>
<p>Different designers will create designs that differ by at least a factor of 10 in the code volume produced. How do you invent simple, straightforward designs and avoid complex, error-prone designs? Understand the fundamental design principles that lead to high-quality designs requiring low implementation effort. Learn both Agile and traditional approaches to create great designs quickly and economically. <a href="https://www.construx.com/Seminars/?dm=0">More &gt;</a></p>
<p>&#160;</p>
<p>PMI and PMBOK are registered marks of the Project Management Institute, Inc.</p>]]></content:encoded>
 </item>
 <item rdf:about="/10x_Software_Development/Technical_Debt_Decision_Making/?blogid=23485">
  <title>Technical Debt Decision Making</title>
  <link>https://www.construx.com/10x_Software_Development/Technical_Debt_Decision_Making/?blogid=23485</link>
  <description><![CDATA[<p><em>[This is an expansion of one of my comments on an earlier </em><a title="Technical Debt" href="/10x_Software_Development/Technical_Debt/ "><em>Technical Debt post</em></a><em>]</em></p>
<p>When you get to a point where you are debating taking on technical debt, people normally consider two possible paths: the "good but expensive" path and the "quick and dirty" path. When teams reach that decision point, they often estimate both the good path and the quick path. Those estimates help inform which path the team should choose at that moment. But there are three more issues that should be considered.</p>
<p>The first additional issue to be considered is <em>how much it will cost to backfill the good path after you've already gone down the quick path</em>. Backfilling the good path will typically be more expensive than just following the good path in the first place, because the work will include ripping out the quick code, making sure you didn't introduce any errors while doing that, then adding the good code and going through the normal test &amp; QA processes. The "ripping out" part makes it cost more to implement the good path later than it would have cost to implement it in the first place. And of course you've already incurred the cost of the quick path, so the real cost is the sum of the quick path + the good path + the cost to rip out the quick path.</p>
<p>If the code is really well designed, the "ripping out" cost can be minimal, but I think that's the exception.</p>
<p>The second additional issue that should be considered is the <em>interest payment on the technical debt</em>. I.e., if you choose the quick path now, how much does that slow down other work until you're able to retrofit the good path? The size of the "interest payment" depends very much on the specific case. Sometimes the "interest" is really just the cost of ripping out the quick code and of implementing the good code, which isn't really interest, <em>per se</em>. It's more like a late payment fee. Other times the quick and dirty approach does create ongoing interest payments by making related work in that same area take longer.</p>
<p>This leads us to the third issue that should be considered: <em>Is there a path that is quicker than the good path and that won't affect the rest of the system?</em> In other words, is there a quick path that can be isolated from the rest of the system in such a way that it doesn't create any ongoing interest payment or make other work more difficult? In my experience, teams often turn the technical debt decision into a simplistic "two option" decision -- good path vs. quick and dirty path. Pushing through to a third option is important, because often the best path is the one that is fairly quick, albeit not as quick as the pure quick and dirty path, and whose adverse effects are better contained than those of the pure quick and dirty path.</p>
<p>With those three options, the decision table for deciding which kind of technical debt to take on could look something like this (assuming a labor cost of $600/staff day):</p>
<span>Option 1: Good Path</span> <p>Immediate cost of Good Solution: 10 staff days<br />Deferred cost to retrofit Good Solution: 0 staff days</p>
<p>Option 1 cost now: $6,000<br />Option 1 cost later: $0<br />Option 1 lifetime cost: $6,000</p>
<span>Option 2: Pure Quick &amp; Dirty Path</span> <p>Immediate cost of Quick &amp; Dirty solution with possible interest payment: 2 staff days<br />Deferred cost to retrofit Good Solution: 12 staff days<br />Estimated cost of "interest payments": 0.5 staff days/month</p>
<p>Option 2 cost now: $1,200<br />Option 2 ongoing cost (interest): $600-$1,800 (assuming good solution is implemented within 6 months)<br />Option 2 cost later: $7,200<br />Option 2 lifetime cost: $9,000-$10,200</p>
<span>Option 3: Quick but not Dirty path </span><p>Immediate cost of Quick &amp; Dirty solution with no interest payment: 3 staff days<br />Deferred cost to retrofit Good Solution: 12 staff days</p>
<p>Option 3 cost now: $1,800<br />Option 3 ongoing cost (interest): $0<br />Option 3 cost later: $7,200<br />Option 3 lifetime cost: $9,000</p>
<p>In this example, either Option 2 or Option 3 is an attractive short-term alternative to Option 1. That is, either $1,200 or $1,800 is a fraction of the $6,000 cost/effort. But if you select Option 2 you saddle yourself with an obligation to revise the code later--either you reimplement the good solution, which costs more, or you keep paying interest, which also costs more. When you select Option 3 you introduce the possibility of choosing <em>never</em> to pay off the technical debt, because there isn't any ongoing penalty, and so there isn't any urgency to pay off the debt.</p>
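<p>The arithmetic in the decision table above can be sketched in a few lines of Python. The staff-day figures and the $600/day labor rate are the assumptions from the example; the function name is illustrative.</p>

```python
DAY_RATE = 600  # assumed labor cost per staff day, as in the example


def lifetime_cost(days_now, days_later, interest_days_per_month=0.0, months_carried=0):
    """Lifetime cost of a technical-debt option: the immediate work, the
    deferred retrofit, and any monthly "interest" paid while carrying the debt."""
    interest = interest_days_per_month * months_carried * DAY_RATE
    return (days_now + days_later) * DAY_RATE + interest


option_1 = lifetime_cost(10, 0)           # good path: $6,000
option_2 = lifetime_cost(2, 12, 0.5, 6)   # quick & dirty, carried 6 months: $10,200
option_3 = lifetime_cost(3, 12)           # quick but not dirty: $9,000
```

<p>Running the three calls reproduces the lifetime costs in the table, including the top of Option 2's $9,000-$10,200 range when the debt is carried the full six months.</p>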
<p><strong>Bottom line: </strong>When facing the prospect of taking on technical debt, be sure to generate more than two design options. Don't oversimplify technical debt decision making to just the two extremes.</p>]]></description>
  <dc:creator>stevemcc</dc:creator>
  <dc:date>2007-12-12T11:18:00Z</dc:date>
  <content:encoded><![CDATA[<p><em>[This is an expansion of one of my comments on an earlier </em><a title="Technical Debt" href="https://www.construx.com/10x_Software_Development/Technical_Debt/ "><em>Technical Debt post</em></a><em>]</em></p>
<p>When you get to a point where you are debating taking on technical debt, people normally consider two possible paths, one of which is the "good but expensive" path and the other of which is the "quick and dirty" path. When teams reach that decision point, they often estimate the good path and the quick path. Those estimates help inform which path the team should choose at that moment. But there are three more issues that should be considered.</p>
<p>The first additional issue to be considered is <em>how much it will cost to backfill the good path after you've already gone down the quick path</em>. Backfilling the good path will typically be more expensive than just following the good path in the first place, because the work will include ripping out the quick code, making sure you didn't introduce any errors while doing that, then adding the good code and going through the normal test &amp; QA processes. The "ripping out" part makes it cost more to implement the good path later than it would have cost to implement it in the first place. And of course you've already incurred the cost of the quick path, so the real cost is the sum of the quick path + the good path + the cost to rip out the quick path.</p>
<p>If the code is really well designed, the "ripping out" cost can be minimal, but I think that's the exception.</p>
<p>The second additional issue that should be considered is the <em>interest payment on the technical debt</em>. I.e., if you choose the quick path now, how much does that slow down other work until you're able to retrofit the good path? The size of the "interest payment" depends very much on the specific case. Sometimes the "interest" is really just the cost of ripping out the quick code and of implementing the good code, which isn't really interest, <em>per se</em>. It's more like a late payment fee. Other times the quick and dirty approach does create ongoing interest payments by making related work in that same area take longer.</p>
<p>This leads us to the third issue that should be considered: <em>Is there a path that is quicker than the good path and that won't affect the rest of the system?</em> In other words, is there a quick path that can be isolated from the rest of the system in such a way that it doesn't create any ongoing interest payment or make other work more difficult? In my experience, teams often turn the technical debt decision into a simplistic "two option" decision -- good path vs. quick and dirty path. Pushing through to a third option is important, because often the best path is the one that is fairly quick, albeit not as quick as the pure quick and dirty path, and whose adverse effects are better contained than those of the pure quick and dirty path.</p>
<p>With those three options, the decision table for deciding which kind of technical debt to take on could look something like this (assuming a labor cost of $600/staff day):</p>
<span>Option 1: Good Path</span> <p>Immediate cost of Good Solution: 10 staff days<br />Deferred cost to retrofit Good Solution: 0 staff days</p>
<p>Option 1 cost now: $6,000<br />Option 1 cost later: $0<br />Option 1 lifetime cost: $6,000</p>
<span>Option 2: Pure Quick &amp; Dirty Path</span> <p>Immediate cost of Quick &amp; Dirty solution with possible interest payment: 2 staff days<br />Deferred cost to retrofit Good Solution: 12 staff days<br />Estimated cost of "interest payments": 0.5 staff days/month</p>
<p>Option 2 cost now: $1,200<br />Option 2 ongoing cost (interest): $600-$1,800 (assuming good solution is implemented within 6 months)<br />Option 2 cost later: $7,200<br />Option 2 lifetime cost: $9,000-$10,200</p>
<span>Option 3: Quick but not Dirty path </span><p>Immediate cost of Quick &amp; Dirty solution with no interest payment: 3 staff days<br />Deferred cost to retrofit Good Solution: 12 staff days</p>
<p>Option 3 cost now: $1,800<br />Option 3 ongoing cost (interest): $0<br />Option 3 cost later: $7,200<br />Option 3 lifetime cost: $9,000</p>
<p>In this example, either Option 2 or Option 3 is an attractive short-term alternative to Option 1. That is, either $1,200 or $1,800 is a fraction of the $6,000 cost/effort. But if you select Option 2 you saddle yourself with an obligation to revise the code later--either you reimplement the good solution, which costs more, or you keep paying interest, which also costs more. When you select Option 3 you introduce the possibility of choosing <em>never</em> to pay off the technical debt, because there isn't any ongoing penalty, and so there isn't any urgency to pay off the debt.</p>
<p><strong>Bottom line: </strong>When facing the prospect of taking on technical debt, be sure to generate more than two design options. Don't oversimplify technical debt decision making to just the two extremes.</p>]]></content:encoded>
 </item>
 <item rdf:about="/10x_Software_Development/Technical_Debt/?blogid=23485">
  <title>Technical Debt</title>
  <link>https://www.construx.com/10x_Software_Development/Technical_Debt/?blogid=23485</link>
<description><![CDATA[
<p style="margin-top: 0px; margin-bottom: 0px;"><span>The term technical debt was coined by Ward Cunningham to describe the obligation that a software organization incurs when it chooses a design or construction approach that's expedient in the short term but that increases complexity and is more costly in the long term. </span></p>
<p><span>Ward didn't develop the metaphor in very much depth. The few other people who have discussed technical debt seem to use the metaphor mainly to communicate the concept to technical staff. I agree that it's a useful metaphor for communicating with technical staff, but I'm more interested in the metaphor's incredibly rich ability to explain a critical technical concept to non-technical project stakeholders. </span></p>
<h2><strong><span>What is Technical Debt? Two Basic Kinds</span></strong></h2>
<p><span><em>The first kind of technical debt is the kind that is incurred unintentionally</em>. For example, a design approach just turns out to be error-prone, or a junior programmer just writes bad code. This technical debt is the non-strategic result of doing a poor job. In some cases, this kind of debt can be incurred unknowingly; for example, your company might acquire a company that has accumulated significant technical debt that you don't identify until after the acquisition. Sometimes, ironically, this debt can be created when a team stumbles in its efforts to rewrite a debt-laden platform and inadvertently creates more debt. We'll call this general category of debt Type I.</span></p>
<p><span><em>The second kind of technical debt is the kind that is incurred intentionally</em>. This commonly occurs when an organization makes a conscious decision to optimize for the present rather than for the future. "If we don't get this release done on time, there won't be a next release" is a common refrain--and often a compelling one. This leads to decisions like, "We don't have time to reconcile these two databases, so we'll write some glue code that keeps them synchronized for now and reconcile them after we ship." Or "We have some code written by a contractor that doesn't follow our coding standards; we'll clean that up later." Or "We didn't have time to write all the unit tests for the code we wrote the last 2 months of the project. We'll write those tests after the release." (We'll call this Type II.)</span></p>
<p><span>The rest of my comments focus on the kind of technical debt that's incurred for strategic reasons (Type II). </span></p>
<h2><strong><span>Short-Term vs. Long-Term Debt</span></strong></h2>
<p><span>With real debt, a company will maintain both short-term and long-term debt. You use short-term debt to cover things like gaps between your receivables (payments from customers) and expenses (payroll). You take on short-term debt when you have the money, you just don't have it <em>now</em>. Short-term debt is expected to be paid off frequently. The technical equivalent seems straightforward. Short-term debt is the debt that's taken on <em>tactically and reactively</em>, usually as a late-stage measure to get a specific release out the door. (We'll call this Type II.A.)</span></p>
<p><span>Long-term debt is the debt a company takes on <em>strategically and proactively</em>--investing in new capital equipment, like a new factory or a new corporate campus. Again, the technical equivalent seems straightforward: "We don't think we're going to need to support a second platform for at least five years, so this release can be built on the assumption that we're supporting only one platform." (We'll call this Type II.B.)</span></p>
<p><span>The implication is that short-term debt should be paid off quickly, perhaps as the first part of the next release cycle, whereas long-term debt can be carried for a few years or longer. </span></p>
<h2><strong><span>Incurring Technical Debt</span></strong></h2>
<p><span>When technical debt is incurred for strategic reasons, the fundamental reason is always that the cost of development work today is seen as more expensive than the cost will be in the future. This can be true for any of several reasons.</span></p>
<p><span><em>Time to Market</em><strong>. </strong>When time to market is critical, incurring an extra $1 in development might equate to a loss of $10 in revenue. Even if the development cost for the same work rises to $5 later, incurring the $1 debt now is a good business decision. </span></p>
<p><span><em>Preservation of Startup Capital</em>. In a startup environment you have a fixed amount of seed money, and every dollar counts. If you can delay an expense for a year or two you can pay for that expense out of a greater amount of money later rather than out of precious startup funds now. </span></p>
<p><span><em>Delaying Development Expense</em><strong>. </strong>Unlike financial debt, when a system is retired, all of the system's technical debt is retired with it. Once a system has been taken out of production, there's no difference between a "clean and correct" solution and a "quick and dirty" solution. Consequently, near the end of a system's service life it becomes increasingly difficult to cost-justify investing in anything other than what's most expedient. </span></p>
<h3><span style="font-style: normal;"><strong><span>Be Sure You Are Incurring The Right Kind of Technical Debt</span></strong></span></h3>
<p><span>Some debt is taken on in large chunks: "We don't have time to implement this the right way; just hack it in and we'll fix it after we ship." Conceptually this is like buying a car--it's a large debt that can be tracked and managed. (We'll call this Type II.A.1.)</span></p>
<p><span>Other debt accumulates from taking hundreds or thousands of small shortcuts--generic variable names, sparse comments, creating one class in a case where you should create two, not following coding conventions, and so on. This kind of debt is like credit card debt. It's easy to incur unintentionally, it adds up faster than you think, and it's harder to track and manage after it has been incurred. (We'll call this Type II.A.2.)</span></p>
<p><span>Both of these kinds of debt are commonly incurred in response to the directive to "Get it out the door as quickly as possible." However, the second kind (II.A.2) doesn't pay off even in the short term of an initial development cycle and should be avoided. </span></p>
<h2><strong><span>Debt Service</span></strong></h2>
<p><span>One of the important implications of technical debt is that it must be <em>serviced</em>, i.e., once you incur a debt there will be interest charges. </span></p>
<p><span>If the debt grows large enough, eventually the company will spend more on servicing its debt than it invests in increasing the value of its other assets. A common example is a legacy code base in which so much work goes into keeping a production system running (i.e., "servicing the debt") that there is little time left over to add new capabilities to the system. With financial debt, analysts talk about the "debt ratio," which is equal to total debt divided by total assets. Higher debt ratios are seen as more risky, which seems true for technical debt, too. </span></p>
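<p>To make the analogy concrete, a technical "debt ratio" could be approximated from effort figures an organization already tracks. This is a hypothetical sketch; the function name and numbers are illustrative, not from any particular organization.</p>

```python
def debt_ratio(debt_service_effort, total_effort):
    # Share of engineering effort spent servicing debt (keeping the
    # production system running) rather than increasing the value of
    # other assets. As with the financial debt ratio, higher is riskier.
    return debt_service_effort / total_effort


# Hypothetical legacy code base: 40 of every 100 staff-days go to
# keeping the production system running rather than adding capabilities.
ratio = debt_ratio(40, 100)  # 0.4
```
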
<h2><strong><span>Attitudes Toward Technical Debt</span></strong></h2>
<p><span>Like financial debt, different companies have different philosophies about the usefulness of debt. Some companies want to avoid taking on any debt at all; others see debt as a useful tool and just want to know how to use debt wisely. </span></p>
<p><span>I've found that business staff generally seems to have a higher tolerance for technical debt than technical staff does. Business executives tend to want to understand the tradeoffs involved, whereas some technical staff seem to believe that the only correct amount of technical debt is <em>zero</em>. </span></p>
<p><span>The reason most often cited by technical staff for avoiding debt altogether is the challenge of communicating the existence of technical debt to business staff and the challenge of helping business staff remember the implications of the technical debt that has previously been incurred. Everyone agrees that it's a good idea to incur debt late in a release cycle, but business staff can sometimes resist accounting for the time needed to pay off the debt on the next release cycle. The main issue seems to be that, unlike financial debt, technical debt is much less visible, and so people have an easier time ignoring it. </span></p>
<p><span><strong>How do You Make an Organization's Debt Load More Visible?</strong></span></p>
<p><span>One organization we've worked with maintains a debt list within its defect tracking system. Each time a debt is incurred, the tasks needed to pay off that debt are entered into the system along with an estimated effort and schedule. The debt backlog is then tracked, and any unresolved debt more than 90 days old is treated as critical. </span></p>
<p><span>Another organization maintains its debt list as part of its Scrum product backlog, with similar estimates of effort required to pay off each debt. </span></p>
<p><span>Either of these approaches can be used to increase visibility into the debt load and into the debt service work that needs to occur within or across release cycles. Each also provides a useful safeguard against accumulating the "credit card debt" of a mountain of tiny shortcuts mentioned earlier. You can simply tell the team, "If the shortcut you are considering taking is too minor to add to the debt-service defect list/product backlog, then it's too minor to make a difference; don't take that shortcut. We only want to take shortcuts that we can track and repair later." </span></p>
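<p>The first organization's scheme (tracked debt items that become critical when unresolved past 90 days) could be sketched like this. The field names are illustrative, not taken from any particular tracking tool; the backlog entries echo the earlier examples.</p>

```python
from dataclasses import dataclass
from datetime import date, timedelta


@dataclass
class DebtItem:
    description: str    # the shortcut that was taken
    payoff_days: float  # estimated effort to pay off the debt
    incurred_on: date

    def is_critical(self, today, max_age_days=90):
        # Any unresolved debt more than 90 days old is treated as critical.
        return today - self.incurred_on > timedelta(days=max_age_days)


backlog = [
    DebtItem("glue code keeping the two databases synchronized", 12, date(2007, 7, 1)),
    DebtItem("contractor code not following coding standards", 3, date(2007, 10, 15)),
]
critical = [d for d in backlog if d.is_critical(date(2007, 11, 1))]
```

<p>Here only the first item crosses the 90-day threshold; a team could review this list at each release planning meeting, exactly as it reviews defects or backlog items.</p>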
<h2><strong><span>Ability to Take on Debt Safely Varies</span></strong></h2>
<p><span>Different teams will have different technical debt credit ratings. The credit rating reflects a team"s ability to pay off technical debt after it has been incurred. </span></p>
<p><span>A key factor in the ability to pay off technical debt is the level of debt a team takes on unintentionally, i.e., how much of its debt is Type I. The less debt a team creates for itself through unintentional low-quality work, the more debt a team can safely absorb for strategic reasons. This is true regardless of whether we're talking about taking on Type I vs. Type II debt or whether we're talking about taking on Type II.A.1 vs. Type II.A.2 debt. </span></p>
<p><span>One company tracks debt vs. team velocity. Once a team"s velocity begins to drop as a result of servicing its technical debt, the team focuses on reducing its debt until its velocity recovers. Another approach is to track rework, and use that as a measure of how much debt a team is accumulating. </span></p>
<h2><strong><span>Retiring Debt</span></strong></h2>
<p><span>"Working off debt" can be motivational and good for team morale. A good approach when short-term debt has been incurred is to take the first development iteration after a release and devote that to paying off short-term technical debt. </span></p>
<p><span>The ability to pay off debt depends at least in part on the kind of software the team is working on. If a team incurs short-term debt on a web application, a new release can easily be rolled out after the team backfills its debt-reduction work. If a team incurs short-term debt in avionics firmware--paying off which requires replacing a box on an airplane--that team should have a higher bar for taking on <em>any </em>short-term debt. This is like a minimum payment: if your minimum payment is 3% of your balance, that's no problem. If the minimum payment is $1000 regardless of your balance, you'd think hard about taking on any debt at all. </span></p>
<h2><strong><span>Communicating about Technical Debt</span></strong></h2>
<p><span>The technical debt vocabulary provides a way to communicate with non-technical staff in an area that has traditionally suffered from a lack of transparency. Shifting the dialog from a technical vocabulary to a financial vocabulary provides a clearer, more understandable framework for these discussions. Although the technical debt terminology is not currently in widespread use, I've found that it resonates immediately with every executive I've presented it to, as well as with other non-technical stakeholders. It also makes sense to technical staff, who are often all too aware of the debt load their organization is carrying. </span></p>
<p><span>Here are some suggestions for communicating about debt with non-technical stakeholders:</span></p>
<p><span><em>Use an organization's maintenance budget as a rough proxy for its technical debt service load.</em> However, you will need to differentiate between maintenance that keeps a production system running vs. maintenance that extends the capabilities of a production system. Only the first category counts as technical debt. </span></p>
<p><span><em>Discuss debt in terms of money rather than in terms of features. </em>For example, "40% of our current R&amp;D budget is going into supporting previous releases" or "We"re currently spending $2.3 million per year servicing our technical debt." </span></p>
<p><span><em>Be sure you're taking on the right kind of debt. </em>Not all debts are equal. Some debts are the result of good business decisions; others are the result of sloppy technical practices or bad communication about what debt the business intends to take on. The only kinds that are really healthy are Types II.A.1 and II.B. </span></p>
<p><span><em>Treat the discussion about debt as an ongoing dialog rather than a single discussion.</em> You might need several discussions before the nuances of the metaphor fully sink in. </span></p>
<h2><strong><span>Technical Debt Taxonomy</span></strong></h2>
<p>Here's a summary of the kinds of technical debt:</p>
<p><em>Non-Debt</em></p>
<p>   Feature backlog, deferred features, cut features, etc. Not all incomplete work is debt. These aren't debt because they don't require interest payments. </p>
<p><em><span>Debt</span></em></p>
<p><span>   I. Debt incurred unintentionally due to low quality work</span></p>
<p><span>   II. Debt incurred intentionally</span></p>
<p><span>      II.A. Short-term debt, usually incurred reactively, for tactical reasons</span></p>
<p><span>         II.A.1. Individually identifiable shortcuts (like a car loan)</span></p>
<p><span>         II.A.2. Numerous tiny shortcuts (like credit card debt)</span></p>
<p><span>      II.B. Long-term debt, usually incurred proactively, for strategic reasons</span></p>
<p><strong><span>Summary</span></strong></p>
<p>What do you think? Do you like the technical debt metaphor? Do you think it's a useful way to communicate the implications of technical/business decision making to non-technical project stakeholders? What's your experience? I look forward to your thoughts. </p>
<p><strong><span>Resources</span></strong></p>
<ul>
<li><p>Ward Cunningham's <a href="http://c2.com/doc/oopsla92.html">OOPSLA '92 Experience Report</a> that first mentions technical debt. </p>
</li>
<li><p>Martin Fowler's brief <a href="http://www.martinfowler.com/bliki/TechnicalDebt.html">bliki entry</a> about technical debt. </p>
</li>
<li><p>c2 wiki discussions of <a href="http://www.c2.com/cgi/wiki?ComplexityAsDebt">Complexity As Debt</a> and <a href="http://www.c2.com/cgi/wiki?TechnicalDebt">Technical Debt</a>.</p>
</li>
</ul>
]]></description>
  <dc:creator>stevemcc</dc:creator>
  <dc:date>2007-11-01T13:06:00Z</dc:date>
  <content:encoded><![CDATA[<p>The term technical debt was coined by Ward Cunningham to describe the obligation that a software organization incurs when it chooses a design or construction approach that's expedient in the short term but that increases complexity and is more costly in the long term.</p>
<p>Ward didn't develop the metaphor in very much depth. The few other people who have discussed technical debt seem to use the metaphor mainly to communicate the concept to technical staff. I agree that it's a useful metaphor for communicating with technical staff, but I'm more interested in the metaphor's incredibly rich ability to explain a critical technical concept to non-technical project stakeholders.</p>
<span>What is Technical Debt? Two Basic Kinds</span><p><em>The first kind of technical debt is the kind that is incurred unintentionally</em>. For example, a design approach just turns out to be error-prone or a junior programmer just writes bad code. This technical debt is the non-strategic result of doing a poor job. In some cases, this kind of debt can be incurred unknowingly, for example, your company might acquire a company that has accumulated significant technical debt that you don't identify until after the acquisition. Sometimes, ironically, this debt can be created when a team stumbles in its efforts to rewrite a debt-laden platform and inadvertently creates more debt. We'll call this general category of debt Type I.</p>
<p><em>The second kind of technical debt is the kind that is incurred intentionally</em>. This commonly occurs when an organization makes a conscious decision to optimize for the present rather than for the future. "If we don't get this release done on time, there won't be a next release" is a common refrain--and often a compelling one. This leads to decisions like, "We don't have time to reconcile these two databases, so we'll write some glue code that keeps them synchronized for now and reconcile them after we ship." Or "We have some code written by a contractor that doesn't follow our coding standards; we'll clean that up later." Or "We didn't have time to write all the unit tests for the code we wrote the last 2 months of the project. We'll write those tests after the release." (We'll call this Type II.)</p>
<p>The rest of my comments focus on the kind of technical debt that's incurred for strategic reasons (Type II).</p>
<span>Short-Term vs. Long-Term Debt</span><p>With real debt, a company will maintain both short-term and long-term debt. You use short-term debt to cover things like gaps between your receivables (payments from customers) and expenses (payroll). You take on short-term debt when you have the money, you just don't have it <em>now</em>. Short-term debt is expected to be paid off frequently. The technical equivalent seems straightforward. Short-term debt is the debt that's taken on <em>tactically and reactively</em>, usually as a late-stage measure to get a specific release out the door. (We'll call this Type II.A.)</p>
<p>Long-term debt is the debt a company takes on <em>strategically and proactively</em>--investing in new capital equipment, like a new factory or a new corporate campus. Again, the technical equivalent seems straightforward: "We don't think we're going to need to support a second platform for at least five years, so this release can be built on the assumption that we're supporting only one platform." (We'll call this Type II.B.)</p>
<p>The implication is that short-term debt should be paid off quickly, perhaps as the first part of the next release cycle, whereas long-term debt can be carried for a few years or longer.</p>
<span>Incurring Technical Debt</span><p>When technical debt is incurred for strategic reasons, the fundamental reason is always that the cost of development work today is seen as more expensive than the cost will be in the future. This can be true for any of several reasons.</p>
<p><em>Time to Market</em><strong>. </strong>When time to market is critical, incurring an extra $1 in development might equate to a loss of $10 in revenue. Even if the development cost for the same work rises to $5 later, incurring the $1 debt now is a good business decision.</p>
<p><em>Preservation of Startup Capital</em>. In a startup environment you have a fixed amount of seed money, and every dollar counts. If you can delay an expense for a year or two you can pay for that expense out of a greater amount of money later rather than out of precious startup funds now.</p>
<p><em>Delaying Development Expense</em><strong>. </strong>Unlike financial debt, when a system is retired, all of the system's technical debt is retired with it. Once a system has been taken out of production, there's no difference between a "clean and correct" solution and a "quick and dirty" solution. Consequently, near the end of a system's service life it becomes increasingly difficult to cost-justify investing in anything other than what's most expedient.</p>
<span>Be Sure You Are Incurring The Right Kind of Technical Debt</span><p>Some debt is taken on in large chunks: "We don't have time to implement this the right way; just hack it in and we'll fix it after we ship." Conceptually this is like buying a car--it's a large debt that can be tracked and managed. (We'll call this Type II.A.1.)</p>
<p>Other debt accumulates from taking hundreds or thousands of small shortcuts--generic variable names, sparse comments, creating one class in a case where you should create two, not following coding conventions, and so on. This kind of debt is like credit card debt. It's easy to incur unintentionally, it adds up faster than you think, and it's harder to track and manage after it has been incurred. (We'll call this Type II.A.2.)</p>
<p>Both of these kinds of debt are commonly incurred in response to the directive to "Get it out the door as quickly as possible." However, the second kind (II.A.2) doesn't pay off even in the short term of an initial development cycle and should be avoided.</p>
<span>Debt Service</span><p>One of the important implications of technical debt is that it must be <em>serviced</em>, i.e., once you incur a debt there will be interest charges.</p>
<p>If the debt grows large enough, eventually the company will spend more on servicing its debt than it invests in increasing the value of its other assets. A common example is a legacy code base in which so much work goes into keeping a production system running (i.e., "servicing the debt") that there is little time left over to add new capabilities to the system. With financial debt, analysts talk about the "debt ratio," which is equal to total debt divided by total assets. Higher debt ratios are seen as more risky, which seems true for technical debt, too.</p>
<span>Attitudes Toward Technical Debt</span><p>Like financial debt, different companies have different philosophies about the usefulness of debt. Some companies want to avoid taking on any debt at all; others see debt as a useful tool and just want to know how to use debt wisely.</p>
<p>I've found that business staff generally seems to have a higher tolerance for technical debt than technical staff does. Business executives tend to want to understand the tradeoffs involved, whereas some technical staff seem to believe that the only correct amount of technical debt is <em>zero</em>.</p>
<p>The reason most often cited by technical staff for avoiding debt altogether is the challenge of communicating the existence of technical debt to business staff and of helping business staff remember the implications of debt that has previously been incurred. Most people agree that it can make sense to incur debt late in a release cycle, but business staff can sometimes resist accounting for the time needed to pay off that debt in the next release cycle. The main issue seems to be that, unlike financial debt, technical debt is much less visible, and so people have an easier time ignoring it.</p>
<span>How Do You Make an Organization's Debt Load More Visible?</span><p>One organization we've worked with maintains a debt list within its defect tracking system. Each time a debt is incurred, the tasks needed to pay off that debt are entered into the system along with an estimated effort and schedule. The debt backlog is then tracked, and any unresolved debt more than 90 days old is treated as critical.</p>
<p>Another organization maintains its debt list as part of its Scrum product backlog, with similar estimates of effort required to pay off each debt.</p>
<p>Either of these approaches can be used to increase visibility into the debt load and into the debt service work that needs to occur within or across release cycles. Each also provides a useful safeguard against accumulating the "credit card debt" of a mountain of tiny shortcuts mentioned earlier. You can simply tell the team, "If the shortcut you are considering taking is too minor to add to the debt-service defect list/product backlog, then it's too minor to make a difference; don't take that shortcut. We only want to take shortcuts that we can track and repair later."</p>
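The debt-list approach described above can be sketched as a small data model. This is a hypothetical illustration, not any organization's actual tooling: the 90-day critical threshold comes from the text, but all names, fields, and items are invented for the sketch.

```python
# Minimal sketch of a tracked debt backlog (hypothetical data model).
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class DebtItem:
    description: str
    estimated_effort_days: float
    incurred_on: date
    resolved: bool = False

def critical_items(backlog: list[DebtItem], today: date,
                   max_age_days: int = 90) -> list[DebtItem]:
    """Unresolved debt older than max_age_days is flagged as critical."""
    cutoff = today - timedelta(days=max_age_days)
    return [d for d in backlog if not d.resolved and d.incurred_on <= cutoff]

backlog = [
    DebtItem("hacked-in config parser; needs real error handling", 3, date(2007, 6, 1)),
    DebtItem("duplicate billing logic pending refactor", 5, date(2007, 9, 20)),
]
for item in critical_items(backlog, today=date(2007, 10, 1)):
    print("CRITICAL:", item.description)  # flags only the June item
```

The same shape works whether the register lives in a defect tracker or a Scrum product backlog; the point is that every intentional shortcut gets an entry with an effort estimate.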
<span>Ability to Take on Debt Safely Varies</span><p>Different teams will have different technical debt credit ratings. The credit rating reflects a team's ability to pay off technical debt after it has been incurred.</p>
<p>A key factor in a team's ability to pay off technical debt is how much debt it takes on unintentionally, i.e., how much of its debt is Type I. The less debt a team creates for itself through unintentional low-quality work, the more debt it can safely absorb for strategic reasons. This holds whether the choice is between Type I and Type II debt or between Type II.A.1 and Type II.A.2 debt.</p>
<p>One company tracks debt vs. team velocity. Once a team's velocity begins to drop as a result of servicing its technical debt, the team focuses on reducing its debt until its velocity recovers. Another approach is to track rework, and use that as a measure of how much debt a team is accumulating.</p>
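The velocity-based trigger described above might be sketched like this. The window size and the 90% tolerance are hypothetical choices for illustration, not anything from the companies mentioned:

```python
# Sketch of the velocity-tracking idea: flag a team when its recent sprint
# velocity falls below its historical baseline (thresholds are hypothetical).

def velocity_dropping(velocities: list[float], window: int = 3,
                      tolerance: float = 0.9) -> bool:
    """True when the mean of the last `window` sprints falls below
    `tolerance` times the mean of the sprints before them."""
    if len(velocities) <= window:
        return False  # not enough history to compare against
    baseline = sum(velocities[:-window]) / len(velocities[:-window])
    recent = sum(velocities[-window:]) / window
    return recent < tolerance * baseline

# Steady team, then velocity sags as debt service eats into each sprint:
print(velocity_dropping([30, 32, 31, 30, 24, 23, 22]))  # True
print(velocity_dropping([30, 32, 31, 30, 29, 31, 30]))  # False
```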
<span>Retiring Debt</span><p>"Working off debt" can be motivational and good for team morale. A good approach when short-term debt has been incurred is to take the first development iteration after a release and devote that to paying off short-term technical debt.</p>
<p>The ability to pay off debt depends at least in part on the kind of software the team is working on. If a team incurs short-term debt on a web application, a new release can easily be rolled out after the team completes its debt-reduction work. If a team incurs short-term debt in avionics firmware&mdash;where paying off the debt requires replacing a box on an airplane&mdash;that team should have a higher bar for taking on <em>any</em> short-term debt. This is like a minimum payment: if your minimum payment is 3% of your balance, that's no problem; if the minimum payment is $1,000 regardless of your balance, you'd think hard about taking on any debt at all.</p>
<span>Communicating about Technical Debt</span><p>The technical debt vocabulary provides a way to communicate with non-technical staff in an area that has traditionally suffered from a lack of transparency. Shifting the dialog from a technical vocabulary to a financial vocabulary provides a clearer, more understandable framework for these discussions. Although the technical debt terminology is not currently in widespread use, I've found that it resonates immediately with every executive I've presented it to as well as other non-technical stakeholders. It also makes sense to technical staff who are often all-too-aware of the debt load their organization is carrying.</p>
<p>Here are some suggestions for communicating about debt with non-technical stakeholders:</p>
<p><em>Use an organization's maintenance budget as a rough proxy for its technical debt service load.</em> However, you will need to differentiate between maintenance that keeps a production system running and maintenance that extends the capabilities of a production system. Only the first category counts as technical debt.</p>
<p><em>Discuss debt in terms of money rather than in terms of features. </em>For example, "40% of our current R&amp;D budget is going into supporting previous releases" or "We're currently spending $2.3 million per year servicing our technical debt."</p>
<p><em>Be sure you're taking on the right kind of debt. </em>Not all debts are equal. Some debts are the result of good business decisions; others are the result of sloppy technical practices or bad communication about what debt the business intends to take on. The only kinds that are really healthy are Types II.A.1 and II.B.</p>
<p><em>Treat the discussion about debt as an ongoing dialog rather than a one-time conversation.</em> You might need several discussions before the nuances of the metaphor fully sink in.</p>
<span>Technical Debt Taxonomy</span><p>Here's a summary of the kinds of technical debt:</p>
<p><em>Non-Debt</em></p>
<p>Feature backlog, deferred features, cut features, etc. Not all incomplete work is debt. These aren't debt, because they don't require interest payments.</p>
<p><em><span>Debt</span></em></p>
<p>I. Debt incurred unintentionally due to low quality work</p>
<p>II. Debt incurred intentionally</p>
<p>II.A. Short-term debt, usually incurred reactively, for tactical reasons</p>
<p>II.A.1. Individually identifiable shortcuts (like a car loan)</p>
<p>II.A.2. Numerous tiny shortcuts (like credit card debt)</p>
<p>II.B. Long-term debt, usually incurred proactively, for strategic reasons</p>
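For teams that tag debt items in a tracker, the taxonomy above maps naturally onto a small enumeration. This is a hypothetical sketch: the labels mirror the list, and the "healthy" set follows the earlier observation that only Types II.A.1 and II.B are healthy kinds to take on.

```python
# The technical debt taxonomy encoded as tags a debt tracker might use
# (hypothetical sketch; labels mirror the taxonomy in the text).
from enum import Enum

class DebtType(Enum):
    UNINTENTIONAL = "I: unintentional, from low-quality work"
    SHORT_TERM_IDENTIFIABLE = "II.A.1: intentional short-term, identifiable shortcut (car loan)"
    SHORT_TERM_TINY = "II.A.2: intentional short-term, many tiny shortcuts (credit card)"
    LONG_TERM_STRATEGIC = "II.B: intentional long-term, strategic"

# Per the text, only II.A.1 and II.B are really healthy kinds of debt:
HEALTHY = {DebtType.SHORT_TERM_IDENTIFIABLE, DebtType.LONG_TERM_STRATEGIC}
print(DebtType.SHORT_TERM_TINY in HEALTHY)  # False
```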
<span>Summary</span><p>What do you think? Do you like the technical debt metaphor? Do you think it's a useful way to communicate the implications of technical/business decision making to non-technical project stakeholders? What's your experience? I look forward to your thoughts.</p>
<span>Resources</span><ul>
<li>Ward Cunningham's <a href="http://c2.com/doc/oopsla92.html">OOPSLA '92 Experience Report</a> that first mentions technical debt. </li>
<li>Martin Fowler's brief <a href="http://www.martinfowler.com/bliki/TechnicalDebt.html">bliki entry</a> about technical debt. </li>
<li>c2 wiki discussions of <a href="http://www.c2.com/cgi/wiki?ComplexityAsDebt">Complexity As Debt</a> and <a href="http://www.c2.com/cgi/wiki?TechnicalDebt">Technical Debt</a>. </li>
</ul>]]></content:encoded>
 </item>
 <item rdf:about="/10x_Software_Development/5_Questions_on_Agile_Development/?blogid=23485">
  <title>5 Questions on Agile Development</title>
  <link>https://www.construx.com/10x_Software_Development/5_Questions_on_Agile_Development/?blogid=23485</link>
   <description></description>
  <dc:creator>stevemcc</dc:creator>
  <dc:date>2007-10-08T12:37:00Z</dc:date>
  <content:encoded><![CDATA[<p><em>PM*Boulevard</em> interviewed me earlier this summer about Agile development. Below I've excerpted the <em>PM*Boulevard </em>interview, updated some of my answers, and added a little additional commentary.</p>
<p><strong><font size="3">5Qs on Agile with Steve McConnell</font><br /></strong>   <br />Readers of <em>Software Development </em>magazine once named Steve McConnell one of the three most influential people in the software industry. The CEO and Chief Software Engineer at Construx Software, Steve has generously agreed to kick off our "5Qs on Agile" feature by answering the following five often-asked questions about Agile development.</p>
<p><strong>Q1: Why use Agile methods? </strong><br />Agile methods focus on shorter iterations, in which the software is brought to a releasable level of quality fairly often, usually somewhere between weekly and monthly. Short iterations provide numerous technical and management benefits. On the technical side, the main benefit is reduced integration risk because the amount of software being integrated is kept small. Short iterations also help to keep quality under control by driving to a releasable state frequently, which prevents a project from accumulating a large backlog of defect correction work. On the management side, the frequent iterations provide frequent evidence of progress, which tends to lead to good status visibility, good customer relations, and good team morale.</p>
<p>Agile methods also usually treat requirements as more dynamic than traditional methods do. For some environments that's a plus and for some it's a minus. If you're working in an environment that doesn't need a lot of long range predictability in its feature set, treating requirements dynamically can save a lot of detailed requirements specification work and avoid the "requirements spoilage" that often goes along with working through a lengthy backlog of detailed requirements.</p>
<p><strong>Q2: What is the biggest challenge when implementing Agile methods? <br /></strong>The biggest challenge we see in our consulting and training business is walking the walk. You can't just say you're doing Agile. You have to follow through with specific actions. Of course that's the same problem we saw years ago with object oriented methods, and before that with structured methods, so that problem isn't unique to Agile.</p>
<p>The most common specific challenges we see are simply the challenges of "turning the battleship" in a large organization to overcome the inertia of entrenched work practices and expectations and getting reoriented to do things in a different way. You have to muster the resolve to actually work in short iterations. You have to build frequently, at least every day, and you have to develop the discipline to keep the build healthy. You have to push each iteration to a releasable level of quality even if that's hard to do at first. As before, this problem isn't unique to Agile. If we're working with an organization and find that their biggest need is to do a better job of defining requirements up front (which isn't very agile), "turning the battleship" to define better requirements up front will be just as hard.</p>
<p><strong>Q3: In what environments will Agile be most successful? <br /></strong>Full-blown Agile seems to me to be best suited for environments in which the budget is fixed on an annual basis, team sizes are fixed on an annual basis (because of the budget), and the project staff's mission is to deliver the most valuable business functionality that they can deliver in a year's time with a fixed team size. This mostly describes in-house, business systems dev organizations.</p>
<p>Full-blown agile (especially the flexible requirements part) is less well suited to companies that sell software, because maintaining a lot of requirements flexibility runs counter to the goal of providing mid-term and long-term predictability. We've found that many organizations value predictability more than they value requirements flexibility. That is, they value the ability to make commitments to key customers or to the marketplace more than they value the ability to "respond to change."</p>
<p>For anything less than full-blown Agile, however, we find that many agile practices are well-suited to the vast majority of environments. Short development iterations are nearly always valuable, regardless of whether you define 5% of your requirements up front or 95%. Keeping the software close to a releasable level of quality at all times is virtually always valuable. Scrum as a management style and discipline seems to be very broadly applicable. Small, empowered teams are nearly always useful. I go into more detail on the strengths and weaknesses of specific agile development practices in my executive presentation on <a href="https://www.construx.com/Thought_Leadership/Events/Practical_benefits_profound_results/">The Legacy of Agile Development</a>.</p>
<p><strong>Q4: What is the future of Agile? <br /></strong>Agile has largely become a synonym for "modern software practices that work," so I think the future of Agile with a capital "A" is the same as the past of Object Oriented or Structured. We rarely talk about Object Oriented programming anymore; it's just programming. Similarly, I think Agile has worked its way into the mainstream such that within the next few years we won't talk about Agile much anymore; we'll just talk about programming, and it will be assumed that everyone means Agile whenever that's appropriate.</p>
<p><strong>Q5: Can you recommend a book, blog, podcast, Web site, or other information source to our readers that you find interesting or intriguing right now? <br /></strong>I'm most excited about the Software Development Best Practices discussion forum that we launched a few weeks ago. That's at <a href="https://www.construx.com/Blogs/10x_Software_Development/?id=15082">http://www.construx.com/</a> . I also started blogging recently, and you can read my blog at <a href="https://www.construx.com/Blogs/10x_Software_Development/?id=15082">http://www.construx.com/Blogs/10x_Software_Development/?id=15082</a> . Feel free to contact me by e-mail at <a href="mailto:stevemcc@construx.com">stevemcc@construx.com</a>. </p>
<p><strong><font size="3">Additional Commentary (October 2007)</font></strong><br />After this interview posted back in August 2007 I received an interesting email that said, "Wow, you seem really pro-Agile now. What happened?"</p>
<p>I was surprised at that email because I didn't think my comments in the 5Qs were especially "pro agile." I thought they emphasized the strengths of agile and also some of the common failure modes. Another reason that comment was interesting was the hint that I'd been "anti-agile" before. I've never been either pro-agile or anti-agile -- I've always been pro-whatever-practices-work-best. In many situations the practices that work best are the practices that today are associated with agile development. And in some circumstances, other older practices still work best.</p>
<p>So I'm not pro-agile or anti-agile. I'm not pro-CMM or anti-CMM, pro-BDUF or anti-BDUF, or pro-pair programming or anti-pair programming. For that matter I'm not even pro-waterfall or anti-waterfall. At Construx we've worked with such a tremendous variety of companies over the years that we've encountered at least one environment in which <em>each </em>of these practices will work best. The big wide world of software projects is amazingly diverse, and that calls for software development practices that are just as amazingly diverse. Agile has simply added more tools to the toolbox so that we have a richer set from which to choose the right tool for any particular job.</p>
<span>Resources</span><ul>
<li>Legacy of Agile Development <a href="https://www.construx.com/Thought_Leadership/Events/Practical_benefits_profound_results/">Executive presentation</a>  </li>
<li>PM*Boulevard's original <a href="http://www.pmboulevard.com/Default.aspx?page=View%20Content&amp;cid=2334&amp;parent=5970">5Qs about agile interview</a> with me. (Did the original interview really seem rabidly pro agile?) </li>
<li>Other PM*Boulevard <a href="http://www.pmboulevard.com/Default.aspx?page=Agile&amp;cid=2334">5Q interviews</a> about Agile</li>
<li>Construx's <a href="https://www.construx.com/Seminars/?dm=1">How to Be Agile Without Being Extreme</a> seminar</li>
<li>Construx's <a href="https://www.construx.com/Thought_Leadership/Events/Practical_benefits_profound_results/">Agile Practices Review/Agile Adoption</a></li>
</ul>]]></content:encoded>
 </item>
 <item rdf:about="/10x_Software_Development/Building_a_Fort__Lessons_in_Software_Estimation/?blogid=23485">
  <title>Building a Fort: Lessons in Software Estimation</title>
  <link>https://www.construx.com/10x_Software_Development/Building_a_Fort__Lessons_in_Software_Estimation/?blogid=23485</link>
  <description><![CDATA[<span>Also Known as: How I Spent My Summer Vacation</span><p>My big project this summer was building a fort for my kids. I'd wanted to build a clubhouse or treehouse or fort or something for the past few years, but we didn't have a good place to put it. Then while clearing some blackberries in the spring I discovered that our property extended about 20 feet further into the adjacent overgrown area than I had thought, and that was the perfect place for a fort.</p>
<p>Whenever I do a physical construction project like this I try to pay attention to which attributes of the project are similar to software projects and which are different. The comparisons are made more challenging by the fact that my construction projects are recreational, whereas I'm trying to draw comparisons to commercial software projects. For the first half of the project, no good similarities jumped out at me. But as the project started to take much longer than I expected, I began to see more and more similarities between my estimates on the fort and the problems people run into with software estimates.</p>
<span>Original Work Plan</span><p>Here was the work plan I had carried around in my head for a few weeks before I started the project:</p>
<p><strong>Day 1: </strong>Dig holes for footings, pour concrete for footings, haul building materials from my driveway up the hill to the fort. <br /><strong>Day 2:</strong> Cut posts and beams to length. This was planned as a half day because I didn't want to put too much stress on the concrete footings until Day 3. <br /><strong>Day 3: </strong>Finish beams and joists and install decking; do some of the deck railings as time permits. <br /><strong>Day 4:</strong> Complete the fort's framing, minus the roof. <br /><strong>Day 5:</strong> Frame and install the roof. <br /><strong>Day 6:</strong> Install door and windows; finish deck railing; install trim boards. <br /><strong>Part time over the next couple of weeks: </strong>Finish loose ends.</p>
<p>I won't go through all the errors in my estimates for the whole project, but let's take a look at what I really did on Day 1.</p>
<span>DAY 1</span><p><strong>1.1 Clear brush from site </strong>(~1 hour). I'd known that I had a little brush still to clear, but I thought it would take me about 10-15 minutes. Once I started looking at where I needed to put the footings, I found that I really couldn't put them where I'd planned because I would be inside the setback for the property. So I needed to move the fort back about 5 feet, and that meant clearing a bunch of brush, including scrubby trees that I hadn't planned to clear.</p>
<p><strong>1.2 Survey the site and determine placement of footings</strong> (~3 hours). I'd originally planned to build the deck with 2 beams and 2 posts per beam. After looking at some span tables, I concluded that I could *probably* get away with 2 beams with 2 posts each, but what I was building was right on the border between 2 and 3 beams and between 2 and 3 posts per beam. I decided to err on the side of caution, and that meant I needed 9 footings instead of 4. Meanwhile, I had never really adjusted my time expectations to digging 9 holes instead of 4. Siting the 9 holes also turned out to be an issue because of a big stump in the middle of my area.</p>
<div class="blogImgFrame"><a href="/uploadedimages/2007-07-084.jpg"><img title="Stump" alt="Stump" src="/uploadedimages/2007-07-084.jpg" width="768" /></a></div>
<p>The site overall had more of a slope than I had realized. I wanted to stake out the corners and use string to locate the position of each hole and make sure the holes were square. Due to the slope, the stakes I was using weren't tall enough on the downhill side of the site, and I spent time pounding in stakes that ended up not being tall enough, then pulling them out, hammering together makeshift taller stakes, and then pounding those in.</p>
<p>I ended up spending a lot of time moving stakes and string around trying to figure out how to get 9 holes that were (a) not blocked by roots from the stump, (b) not blocked by roots from the tree in back of the fort, (c) far enough back from the property line to meet the setback requirement, (d) square relative to each other (which was hard to determine at this stage because of the slope I was building on).</p>
<p><strong>1.3 Dig post holes</strong> (~2 hours). I had to dig 9 holes, 12" in diameter, 24" deep. This actually went quicker than I expected. I used a clamshell digger and for the holes where I didn't run into any roots it was something like 5 minutes per hole. The difficult holes were the holes where I ran into roots partway down and then had to hack them out. Some of the holes had quite a few roots.</p>
<p><strong>1.4 Haul 20 80# bags of concrete up the hill </strong>(~1.5 hours). I had originally thought I could haul the concrete up the hill using a wheelbarrow, but the hill was just too steep. So I had to hand carry each 80# bag one at a time. It was also about 80 degrees and 95% relative humidity at this point, which meant I needed to rest and drink water every couple of bags. The change from 4 holes to 9 holes also increased the number of bags I had to haul from about 10 to about 20.</p>
<div class="blogImgFrame"><a title="Hill Side" href="/uploadedimages/2007-08-115.jpg"><img title="Hillside" alt="Hillside" src="/uploadedimages/2007-08-115.jpg" width="512" height="384" /></a></div>
<p><strong>1.5 Pour 2 Footings </strong>(1.5 hours). At this point I was pretty worn out, but I also really wanted the feeling of completion from pouring at least one of the footings. So I ended up pouring 2 of the footings and calling it a day since there was no way I was going to complete all 9 of them at that point in the day.</p>
<p><strong>End of Day 1. </strong>The picture below shows how far I got at the end of Day 1.</p>
<div class="blogImgFrame"><a href="/uploadedimages/2007-07-086.jpg"><img title="End of Day 1" alt="End of Day 1" src="/uploadedimages//2007-07-086.jpg" width="768" height="576" /></a></div>
<span>What Went Wrong with My Estimate for Day 1</span><ul>
<li>I hadn't examined my planned site well enough to know what I didn't know -- i.e., my originally planned site wouldn't work and I didn't understand how much slope there was. </li>
<li>I never revised the expectations I had created while planning a 4-footing Day 1 to more appropriate expectations for a 9-footing Day 1. That one mistake affected my site layout, concrete hauling, hole digging, and concrete pouring. </li>
<li>Brush clearing just took longer than I expected, and I hadn't included it in my estimate at all. </li>
<li>Surveying the site also just took longer than I expected, and would have even without the change from 4 holes to 9. </li>
</ul>
<span>DAY 2</span><p>What I could complete on Day 2 was limited by the fact that I hadn't poured all the footings on Day 1, so about all I could do on Day 2 was pour the remaining 7 footings and haul the rest of the building materials up the hill. The rest of the footing pouring went fine and took about 4 hours. Then I needed to haul the materials up from the driveway. The pile of stuff didn't look all that intimidating:</p>
<div class="blogImgFrame "><a href="/uploadedimages/2007-07-078.jpg"><img title="Fort Materials" alt="Fort Materials" src="/uploadedimages/2007-07-078.jpg" width="768" height="576" /></a></div>
<p>Superficial appearances aside, however, there are 10 16' 2x8 pressure-treated joists in that pile, and those suckers are heavy. There are also 3 12' 4x8 pressure-treated beams in that pile, and those suckers are *really* heavy! And then there were 70 2x4s, 50 lengths of 5/4" decking, 100 2x2 balusters for the railing, about 15 sheets of plywood, 2 bundles of roofing shingles, and a lot of other stuff, and it all starts to add up after a while. It took me at least 50 trips up the hill, which ended up taking about 3 hours.</p>
<p>At the end of Day 2 I was about where I thought I'd be at the end of Day 1 after 2 pretty long days. For the record, here's what I had done at the end of Day 2:</p>
<div class="blogImgFrame"><a href="/uploadedimages/2007-07-089.jpg"><img title="End of Day 2" alt="End of Day 2" src="/uploadedimages/2007-07-089.jpg" width="768" height="576" /></a></div>
<span>What Went Wrong with the rest of My Estimate for Day 1 (i.e., the Work I Did on Day 2)</span><ul>
<li>Hauling the building materials up the hill took longer than I planned, mostly because I'd never bothered to break down the "hauling" task and realize that it was going to take 50 trips, not 10. </li>
</ul>
<span>DAYS 3-6</span><p>Days 3-6 went about like Days 1 &amp; 2 had gone, which is to say there were lots of little tasks that turned out to be medium-sized tasks, there were little tasks that I just hadn't anticipated, and most things took longer than I had planned. By the end of Day 7 (my buffer day), I was done with the tasks I had planned for Day 3 and had a tiny start on Day 4: I'd completed the decking, hadn't started on the railings, and had one wall of the fort framed, but that was all.</p>
<span>DAYS 7 AND FOLLOWING</span><p>Since I'd used up my planned full-time days on Day 7, the rest of the fort had to be completed after work, so I had to work on it only a few hours at a time, and I couldn't work on it every day. So my calendar time overrun started stretching out faster than my effort overrun did.</p>
<span>OVERALL RESULTS</span><p>My original plan had called for about a week full time and then another couple of weeks of finishing up loose ends like painting, installing trim, and so on. I finished the fort about 6 weeks after I started it, so I was about 100% over my planned schedule, and I ended up at 2-3x my originally planned effort. Here are some pictures of how it turned out:</p>
<div class="blogThumbFrame clearfix"><a href="/uploadedimages/2007-09-061.jpg"><img src="/uploadedimages/2007-09-061.jpg" width="246" height="502" /></a>  <a href="/uploadedimages/2007-09-066.jpg"><img src="/uploadedimages/2007-09-066.jpg" width="255" height="260" /></a> <a href="/uploadedimages/2007-09-073.jpg"><img src="/uploadedimages/2007-09-073.jpg" width="254" height="363" /></a></div>
<span>ESTIMATION LESSONS LEARNED</span><p>I mentioned some of the specific estimation mistakes on Days 1 &amp; 2. As I got into the project I noticed several other issues that the estimates I'd made for my project had in common with errors in software estimates that we see with our clients:</p>
<p><strong>1. Numerous unplanned problems collectively added up.</strong> Here are a few of the problems I ran into:</p>
<ul>
<li>When I opened the case of my chainsaw, which I hadn't used in a couple of years, I found that the oil plug hadn't been screwed in tightly, and the chainsaw was covered in oil, as was the case itself. I had to clean up the chainsaw and then dispose of the oil. </li>
<li>When I was digging the holes for the footings, I chopped my layout strings a couple times with the post hole digger. I had to spend time repairing the strings and making sure that everything was still square. </li>
<li>I dropped a little piece of my laser level down the side of one of the footing holes, between the concrete form and the dirt, after I'd poured the concrete. The piece was perched about 8" into the hole, just where I couldn't reach it. If I touched it wrong, it would drop the full 24" to the bottom of the hole where there was no way I could retrieve it. So it took me awhile to figure out how to get the piece out without risking losing it altogether. </li>
<li>My jigsaw has a little compartment/drawer to hold spare blades, and it kept coming loose and spilling blades on the ground. I looked at it several times before the sunlight finally hit the opening just right and I could see that a blade was stuck under the slot where the drawer slides in, preventing the drawer from seating properly. Getting that blade out took about half an hour. </li>
<li>I had trouble stabilizing the deck-railing posts by the "drawbridge" (gate). In hindsight, I should have used double joists on that side of the deck, but I didn't. I spent a lot of time staring at these two posts, wiggling them, adding blocking to the joists below, and so on. </li>
<li>There was no adequate footing for a ladder on the back side of the fort, and there was a big tree that made putting a ladder in awkward. Tasks that took 10 minutes on the front of the fort (like nailing up a fascia board under the roof) took an hour on the back. </li>
</ul>
<p>I think these problems are EXACTLY like the kinds of problems that show up unexpectedly on software projects -- two new tools you buy don't interface with each other like they're supposed to, and you have to figure out why. Or you install a new tool and suddenly your code doesn't compile anymore. Or you have a module that keeps producing errors because the design isn't quite right; you think you can't justify completely redesigning and rewriting it, but you end up nickel-and-diming your way to a higher cost than if you had bitten the bullet and redesigned and rewritten it.</p>
<p><strong>2. Underestimation of unfamiliar tasks. </strong>My estimates weren't too far off for a lot of the work that I'd done before. But some things that I assumed would be 15-30 minute tasks, like mapping out the site for the footing holes, ended up taking several hours.</p>
<p><strong>3. Not decomposing big tasks into smaller subtasks. </strong>I'd planned out my project in whole days. From a bird's-eye view, nothing seems obviously wrong with planning to "frame the fort in one day." But break it down and ask what's involved in each of the 4 walls, and you realize that one wall includes a door, another includes an angled top plate, a third includes an angled top plate and a window, and so on. Then you think about what's involved in each wall (measuring, cutting, checking for square, recutting anything that wasn't quite right, tilting the wall up, checking again for plumb and square, attaching and then removing the temporary supports, etc.), and you start thinking, can I really do a whole wall in 2 hours? If the answer is even close to "no," then you start to realize that the estimate for the whole big task is probably wrong.</p>
<p><strong>4. Using overly round time units. </strong>Using round units like "1 day" contributes to not thinking hard enough about decomposing large tasks into smaller tasks.</p>
<p><strong>5. Substituting a target for an estimate.</strong> I had 7 days to do the project, and my estimate turned out to be 7 days. That's a little suspicious, and I should have known better than to make that particular mistake!</p>
<p><strong>6. Sweeping numerous little tasks under the estimation rug. </strong>There were lots of tasks that I knew needed to be done, but I didn't want to admit that they were going to affect my schedule, so I tried not to think about them or minimized their impact (I realized this later). These tasks ranged from clearing brush from the site, to painting the trim, to installing the door knob. I think this is strikingly similar to software estimates in which people just don't want to acknowledge that data conversion takes time, installing new tools takes time, converging each release takes time, etc.</p>
<p><strong>7. Never creating a <em>real</em> estimate.</strong> The fact of the matter is that I carried around a rough plan in my head for weeks, but I never actually committed a schedule to paper, and I never even considered creating a detailed estimate for the project. Of course the likelihood of making the other estimation mistakes I mentioned is higher when you don't officially create an estimate!</p>
<p><strong>8. All's Well That Ends Well. </strong>My kids love their fort, and I had a great time building it. "All's well that ends well" is one reason that companies don't improve their software practices more often than they do. If people like the software that the team produced, and the software is successful, then that reduces the incentive to try to do better next time.</p>
<p><strong>Some Differences</strong></p>
<p>There were a few differences between my fort experience and a typical commercial software project:</p>
<ul>
<li><strong>There was no way I was going to compromise quality for the sake of schedule. </strong>I couldn't build something that would be hazardous to my kids or their friends. So the schedule was going to be whatever it was going to be -- it was clearly a secondary priority. We don't see many companies in which quality trumps schedule to the degree it did on this project. </li>
<li><strong>My schedule overrun was free. </strong>My extra time was essentially free on this project -- maybe even a bonus since I was enjoying the project. So my overrun didn't imply a cost penalty. On a software project, you'd be paying for extra staff time, and so you'd have a cost overrun along with the schedule overrun. </li>
<li><strong>The estimation error didn't really matter, because I was going to do the project regardless of what the estimate turned out to be.</strong> If I had created a real estimate and had learned that the project was going to take 100 hours instead of 50 hours, I would still have done the project. It's much harder to justify not estimating and then flying blind for a business project, especially in the era of Sarbanes-Oxley. </li>
</ul>
<span>What Do You Think? </span><p>Are there other lessons I should have learned? Are these the right lessons? Are these parallels between fort building and software estimation valid at all? Let me know what you think.</p>]]></description>
  <dc:creator>stevemcc</dc:creator>
  <dc:date>2007-09-23T10:31:00Z</dc:date>
  <content:encoded><![CDATA[<span>Also Known as: How I Spent My Summer Vacation</span><p>My big project this summer was building a fort for my kids. I'd wanted to build a clubhouse or treehouse or fort or something for the past few years, but we didn't have a good place to put it. Then while clearing some blackberries in the spring I discovered that our property extended about 20 feet further into the adjacent overgrown area than I had thought, and that was the perfect place for a fort.</p>
<p>Whenever I do a physical construction project like this I try to pay attention to which attributes of the project are similar to software projects and which are different. The comparisons are made more challenging by the fact that my construction projects are recreational, whereas I'm trying to draw comparisons to commercial software projects. For the first half of the project, no good similarities jumped out at me. But as the project started to take much longer than I expected, I began to see more and more similarities between my estimates on the fort and problems people run into with software estimates.</p>
<span>Original Work Plan</span><p>Here was the work plan I had carried around in my head for a few weeks before I started the project:</p>
<p><strong>Day 1: </strong>Dig holes for footings, pour concrete for footings, haul building materials from my driveway up the hill to the fort. <br /><strong>Day 2:</strong> Cut posts and beams to length. This was planned as a half day because I didn't want to put too much stress on the concrete footings until Day 3. <br /><strong>Day 3: </strong>Finish beams and joists and install decking; do some of the deck railings as time permits<br /><strong>Day 4:</strong> Complete the fort's framing, minus the roof<br /><strong>Day 5:</strong> Frame and install the roof<br /><strong>Day 6:</strong> Install door and windows; finish deck railing; install trim boards<br /><strong>Part time the next couple of weeks: </strong>Finish loose ends</p>
<p>I won't go through all the errors in my estimates for the whole project, but let's take a look at what I really did on Day 1.</p>
<span>DAY 1</span><p><strong>Task 1.1 Clear brush from site </strong>(~1 hour). I'd known that I had a little brush still to clear, but I thought it would take me about 10-15 minutes. Once I started looking at where I needed to put the footings, I found that I really couldn't put them where I'd planned because I would be inside the setback for the property. So I needed to move the fort back about 5 feet, and that meant clearing a bunch of brush including scrubby trees that I hadn't planned to clear.</p>
<p><strong>1.2 Survey the site and determine placement of footings</strong> (~3 hours). I'd originally planned to build the deck with 2 beams and 2 posts per beam. After looking at some span tables, I concluded that I could *probably* get away with 2 beams with 2 posts each, but what I was building was right on the border between 2 and 3 beams and between 2 and 3 posts per beam. I decided to err on the side of caution, and that meant I needed 9 footings instead of 4. Meanwhile, I had never really adjusted my time expectations to digging 9 holes instead of 4. Siting the 9 holes also turned out to be an issue because of a big stump in the middle of my area.</p>
<div class="blogImgFrame"><a href="https://www.construx.com/uploadedimages/2007-07-084.jpg"><img title="Stump" alt="Stump" src="https://www.construx.com/uploadedimages/2007-07-084.jpg" width="768" /></a></div>
<p>The site overall had more of a slope than I had realized. I wanted to stake out the corners and use string to locate the position of each hole and make sure the holes were square. Due to the slope, the stakes I was using weren't tall enough on the downhill side of the site, and I spent time pounding in stakes that ended up not being tall enough, then pulling them out, hammering together makeshift taller stakes, and then pounding those in.</p>
<p>I ended up spending a lot of time moving stakes and string around trying to figure out how to get 9 holes that were (a) not blocked by roots from the stump, (b) not blocked by roots from the tree in back of the fort, (c) far enough back from the property line to meet the setback requirement, (d) square relative to each other (which was hard to determine at this stage because of the slope I was building on).</p>
<p><strong>1.3 Dig post holes</strong> (~2 hours). I had to dig 9 holes, 12" in diameter, 24" deep. This actually went quicker than I expected. I used a clamshell digger and for the holes where I didn't run into any roots it was something like 5 minutes per hole. The difficult holes were the holes where I ran into roots partway down and then had to hack them out. Some of the holes had quite a few roots.</p>
<p><strong>1.4 Haul 20 80# bags of concrete up the hill </strong>(~1.5 hours). I had originally thought I could haul the concrete up the hill using a wheelbarrow, but the hill was just too steep. So I had to hand carry each 80# bag one at a time. It was also about 80 degrees and 95% relative humidity at this point, which meant I needed to rest and drink water every couple of bags. The change from 4 holes to 9 holes also increased the number of bags I had to haul from about 10 to about 20.</p>
<div class="blogImgFrame"><a title="Hill Side" href="https://www.construx.com/uploadedimages/2007-08-115.jpg"><img title="Hillside" alt="Hillside" src="https://www.construx.com/uploadedimages/2007-08-115.jpg" width="512" height="384" /></a></div>
<p><strong>1.5 Pour 2 Footings </strong>(1.5 hours). At this point I was pretty worn out, but I also really wanted the feeling of completion from pouring at least one of the footings. So I ended up pouring 2 of the footings and calling it a day since there was no way I was going to complete all 9 of them at that point in the day.</p>
<p><strong>End of Day 1. </strong>The picture below shows how far I got at the end of Day 1.</p>
<div class="blogImgFrame"><a href="https://www.construx.com/uploadedimages/2007-07-086.jpg"><img title="End of Day 1" alt="End of Day 1" src="https://www.construx.com/uploadedimages/2007-07-086.jpg" width="768" height="576" /></a></div>
<span>What Went Wrong with My Estimate for Day 1</span><ul>
<li>I hadn't examined my planned site well enough to know what I didn't know -- i.e., my originally planned site wouldn't work and I didn't understand how much slope there was. </li>
<li>I never revised the expectations I had created while planning a 4-footing Day 1 to more appropriate expectations for a 9-footing Day 1. That one mistake affected my site layout, concrete hauling, hole digging, and concrete pouring. </li>
<li>Brush clearing just took longer than I expected, and I hadn't included it in my estimate at all. </li>
<li>Surveying the site also just took longer than I expected, and would have even without the change from 4 holes to 9. </li>
</ul>
<span>DAY 2</span><p>What I could complete on Day 2 was limited by the fact that I hadn't poured all the footings on Day 1, so about all I could do on Day 2 was pour the remaining 7 footings and haul the rest of the building materials up the hill. The rest of the footing pouring went fine and took about 4 hours. Then I needed to haul the materials up from the driveway. The pile of stuff didn't look all that intimidating:</p>
<div class="blogImgFrame "><a href="https://www.construx.com/uploadedimages/2007-07-078.jpg"><img title="Fort Materials" alt="Fort Materials" src="https://www.construx.com/uploadedimages/2007-07-078.jpg" width="768" height="576" /></a></div>
<p>Superficial appearances aside, however, there are 10 16' 2x8 pressure-treated joists in that pile, and those suckers are heavy. There are also 3 12' 4x8 pressure-treated beams in that pile, and those suckers are *really* heavy! And then there were 70 2x4s, 50 lengths of 5/4" decking, 100 2x2 balusters for the railing, about 15 sheets of plywood, 2 bundles of roofing shingles, and a lot of other stuff, and it all starts to add up after a while. It took me at least 50 trips up the hill, which ended up taking about 3 hours.</p>
<p>At the end of Day 2 I was about where I thought I'd be at the end of Day 1 after 2 pretty long days. For the record, here's what I had done at the end of Day 2:</p>
<div class="blogImgFrame"><a href="https://www.construx.com/uploadedimages/2007-07-089.jpg"><img title="End of Day 2" alt="End of Day 2" src="https://www.construx.com/uploadedimages/2007-07-089.jpg" width="768" height="576" /></a></div>
<span>What Went Wrong with the rest of My Estimate for Day 1 (i.e., the Work I Did on Day 2)</span><ul>
<li>Hauling the building materials up the hill took longer than I planned, mostly because I'd never bothered to break down the "hauling" task and realize that it was going to take 50 trips, not 10. </li>
</ul>
<span>DAYS 3-6</span><p>Days 3-6 went about like Days 1 &amp; 2 had gone, which is to say there were lots of little tasks that turned out to be medium-sized tasks, there were little tasks that I just hadn't anticipated, and most things took longer than I had planned. By the end of Day 7 (my buffer day), I was done with the tasks I had planned for Day 3 and had a tiny start on Day 4: I'd completed the decking, hadn't started on the railings, and had one wall of the fort framed, but that was all.</p>
<span>DAYS 7 AND FOLLOWING</span><p>Since I'd used up my planned full-time days on Day 7, the rest of the fort had to be completed after work, so I had to work on it only a few hours at a time, and I couldn't work on it every day. So my calendar time overrun started stretching out faster than my effort overrun did.</p>
<span>OVERALL RESULTS</span><p>My original plan had called for about a week full time and then another couple of weeks of finishing up loose ends like painting, installing trim, and so on. I finished the fort about 6 weeks after I started it, so I was about 100% over my planned schedule, and I ended up at 2-3x my originally planned effort. Here are some pictures of how it turned out:</p>
<div class="blogThumbFrame clearfix"><a href="https://www.construx.com/uploadedimages/2007-09-061.jpg"><img src="https://www.construx.com/uploadedimages/2007-09-061.jpg" width="246" height="502" /></a>  <a href="https://www.construx.com/uploadedimages/2007-09-066.jpg"><img src="https://www.construx.com/uploadedimages/2007-09-066.jpg" width="255" height="260" /></a> <a href="https://www.construx.com/uploadedimages/2007-09-073.jpg"><img src="https://www.construx.com/uploadedimages/2007-09-073.jpg" width="254" height="363" /></a></div>
<span>ESTIMATION LESSONS LEARNED</span><p>I mentioned some of the specific estimation mistakes on Days 1 &amp; 2. As I got into the project I noticed several other issues that the estimates I'd made for my project had in common with errors in software estimates that we see with our clients:</p>
<p><strong>1. Numerous unplanned problems collectively added up.</strong> Here are a few of the problems I ran into:</p>
<ul>
<li>When I opened the case of my chainsaw, which I hadn't used in a couple of years, I found that the oil plug hadn't been screwed in tightly, and the chainsaw was covered in oil, as was the case itself. I had to clean up the chainsaw and then dispose of the oil. </li>
<li>When I was digging the holes for the footings, I chopped my layout strings a couple times with the post hole digger. I had to spend time repairing the strings and making sure that everything was still square. </li>
<li>I dropped a little piece of my laser level down the side of one of the footing holes, between the concrete form and the dirt, after I'd poured the concrete. The piece was perched about 8" into the hole, just where I couldn't reach it. If I touched it wrong, it would drop the full 24" to the bottom of the hole where there was no way I could retrieve it. So it took me awhile to figure out how to get the piece out without risking losing it altogether. </li>
<li>My jigsaw has a little compartment/drawer to hold spare blades, and it kept coming loose and spilling blades on the ground. I looked at it several times before the sunlight finally hit the opening just right and I could see that a blade was stuck under the slot where the drawer slides in, preventing the drawer from seating properly. Getting that blade out took about half an hour. </li>
<li>I had trouble stabilizing the deck-railing posts by the "drawbridge" (gate). In hindsight, I should have used double joists on that side of the deck, but I didn't. I spent a lot of time staring at these two posts, wiggling them, adding blocking to the joists below, and so on. </li>
<li>There was no adequate footing for a ladder on the back side of the fort, and there was a big tree that made putting a ladder in awkward. Tasks that took 10 minutes on the front of the fort (like nailing up a fascia board under the roof) took an hour on the back. </li>
</ul>
<p>I think these problems are EXACTLY like the kinds of problems that show up unexpectedly on software projects -- two new tools you buy don't interface with each other like they're supposed to, and you have to figure out why. Or you install a new tool and suddenly your code doesn't compile anymore. Or you have a module that keeps producing errors because the design isn't quite right; you think you can't justify completely redesigning and rewriting it, but you end up nickel-and-diming your way to a higher cost than if you had bitten the bullet and redesigned and rewritten it.</p>
<p><strong>2. Underestimation of unfamiliar tasks. </strong>My estimates weren't too far off for a lot of the work that I'd done before. But some things that I assumed would be 15-30 minute tasks, like mapping out the site for the footing holes, ended up taking several hours.</p>
<p><strong>3. Not decomposing big tasks into smaller subtasks. </strong>I'd planned out my project in whole days. From a bird's-eye view, nothing seems obviously wrong with planning to "frame the fort in one day." But break it down and ask what's involved in each of the 4 walls, and you realize that one wall includes a door, another includes an angled top plate, a third includes an angled top plate and a window, and so on. Then you think about what's involved in each wall (measuring, cutting, checking for square, recutting anything that wasn't quite right, tilting the wall up, checking again for plumb and square, attaching and then removing the temporary supports, etc.), and you start thinking, can I really do a whole wall in 2 hours? If the answer is even close to "no," then you start to realize that the estimate for the whole big task is probably wrong.</p>
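<p>The decomposition arithmetic above can be sketched in a few lines of Python. The subtask names and hour figures below are hypothetical illustrations, not measurements from the fort project:</p>

```python
# Sketch: compare a round top-down estimate with a bottom-up sum of
# subtask estimates. All task names and hours are hypothetical.

# Top-down: "frame the fort in one day" (8 working hours)
top_down_hours = 8

# Bottom-up: subtasks that surface once you decompose a single wall
subtasks_per_wall = {
    "measure and cut": 1.0,
    "assemble and check for square": 1.0,
    "tilt up, plumb, and brace": 0.75,
    "recuts and rework": 0.5,
}

# Per-wall complications (door, angled top plate, window) add more time
walls = ["plain wall", "wall with door", "angled top plate", "angled plate + window"]
extras = {"wall with door": 0.5, "angled top plate": 0.5, "angled plate + window": 1.0}

bottom_up_hours = sum(
    sum(subtasks_per_wall.values()) + extras.get(wall, 0.0) for wall in walls
)

print(f"top-down estimate:  {top_down_hours} hours")
print(f"bottom-up estimate: {bottom_up_hours} hours")  # 15.0 hours
```

<p>Even with generous per-subtask guesses, the bottom-up sum lands near double the round "one day" figure, which is exactly what decomposition is meant to expose.</p>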
<p><strong>4. Using overly round time units. </strong>Using round units like "1 day" contributes to not thinking hard enough about decomposing large tasks into smaller tasks.</p>
<p><strong>5. Substituting a target for an estimate.</strong> I had 7 days to do the project, and my estimate turned out to be 7 days. That's a little suspicious, and I should have known better than to make that particular mistake!</p>
<p><strong>6. Sweeping numerous little tasks under the estimation rug. </strong>There were lots of tasks that I knew needed to be done, but I didn't want to admit that they were going to affect my schedule, so I tried not to think about them or minimized their impact (I realized this later). These tasks ranged from clearing brush from the site, to painting the trim, to installing the door knob. I think this is strikingly similar to software estimates in which people just don't want to acknowledge that data conversion takes time, installing new tools takes time, converging each release takes time, etc.</p>
<p><strong>7. Never creating a <em>real</em> estimate.</strong> The fact of the matter is that I carried around a rough plan in my head for weeks, but I never actually committed a schedule to paper, and I never even considered creating a detailed estimate for the project. Of course the likelihood of making the other estimation mistakes I mentioned is higher when you don't officially create an estimate!</p>
<p><strong>8. All's Well That Ends Well. </strong>My kids love their fort, and I had a great time building it. "All's well that ends well" is one reason that companies don't improve their software practices more often than they do. If people like the software that the team produced, and the software is successful, then that reduces the incentive to try to do better next time.</p>
<p><strong>Some Differences</strong></p>
<p>There were a few differences between my fort experience and a typical commercial software project:</p>
<ul>
<li><strong>There was no way I was going to compromise quality for the sake of schedule. </strong>I couldn't build something that would be hazardous to my kids or their friends. So the schedule was going to be whatever it was going to be -- it was clearly a secondary priority. We don't see many companies in which quality trumps schedule to the degree it did on this project. </li>
<li><strong>My schedule overrun was free. </strong>My extra time was essentially free on this project -- maybe even a bonus since I was enjoying the project. So my overrun didn't imply a cost penalty. On a software project, you'd be paying for extra staff time, and so you'd have a cost overrun along with the schedule overrun. </li>
<li><strong>The estimation error didn't really matter, because I was going to do the project regardless of what the estimate turned out to be.</strong> If I had created a real estimate and had learned that the project was going to take 100 hours instead of 50 hours, I would still have done the project. It's much harder to justify not estimating and then flying blind for a business project, especially in the era of Sarbanes-Oxley. </li>
</ul>
<span>What Do You Think? </span><p>Are there other lessons I should have learned? Are these the right lessons? Are these parallels between fort building and software estimation valid at all? Let me know what you think.</p>]]></content:encoded>
 </item>
 <item rdf:about="/10x_Software_Development/Industry_Benchmarks_About_Hours_Worked_Per_Week/?blogid=23485">
  <title>Industry Benchmarks About Hours Worked Per Week</title>
  <link>https://www.construx.com/10x_Software_Development/Industry_Benchmarks_About_Hours_Worked_Per_Week/?blogid=23485</link>
  <description><![CDATA[<p>One of my readers asked the following very reasonable question: </p>
<p>We are looking for industry benchmarks detailing the amount of time developers spend on a percentage basis in the following three categories:</p>
<p>1) Core job activities (writing, testing, deploying code, etc.)<br />2) Meetings<br />3) Administrative activities (training, reporting, etc.)</p>
<p>The questions are reasonable. Unfortunately, one of the lessons I've learned after looking at lots of data on questions like this is that sometimes reasonable questions don't have reasonable answers!</p>
<p>In this case, what I would call "project-focused hours" per month can easily vary by a factor of two between different companies based on factors like how much time is spent in meetings, how long the workdays are (think government job vs. internet startup), number of holidays, number of training days, number of non-project meetings, level of support required for software already in production, etc. A common "big company" planning number is 6 hours of project-focused work per day, for the days that the employee is actually at work, but that can vary a lot across big companies and even within big companies. Based on what we see in our consulting practice, I think it's rare to average 6 hours per day of truly project-focused work in a non-startup company. The most common distraction from project-focused work we see is time spent supporting prior releases that are in production.</p>
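<p>A back-of-the-envelope calculation shows how easily a factor-of-two spread arises from these overheads. The company profiles and figures below are illustrative assumptions, not benchmarks:</p>

```python
# Sketch: annual project-focused hours under two hypothetical company
# profiles. All figures are illustrative assumptions, not benchmarks.

def project_hours_per_year(hours_per_day, workdays=250, holidays=10,
                           training_days=0, support_fraction=0.0):
    """Rough annual project-focused hours after subtracting overhead."""
    available_days = workdays - holidays - training_days
    return available_days * hours_per_day * (1.0 - support_fraction)

# "Big company" planning number: 6 focused hours per day at work,
# with generous holidays and training
big_co = project_hours_per_year(6, holidays=12, training_days=10)

# Meeting-heavy company: ~3.5 focused hours/day, plus 20% of remaining
# time spent supporting prior releases in production
meeting_heavy = project_hours_per_year(3.5, holidays=12, support_fraction=0.20)

print(f"big company:   {big_co:.0f} hours/year")
print(f"meeting-heavy: {meeting_heavy:.0f} hours/year")
print(f"ratio:         {big_co / meeting_heavy:.1f}x")
```

<p>Under these made-up but plausible inputs, the two profiles differ by roughly 2x, which is why a single published "benchmark" allocation rarely transfers between organizations.</p>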
<p>The number of meetings varies a lot too and is significantly affected by company culture. When I was at Microsoft in 1990-91 I probably spent less than 5 hours a week in meetings. In contrast, I had a former Microsoft employee tell me earlier this year that on the team he was on he was booked in meetings from 10:00-4:00 5 days a week. Lots of managers at other companies have told me that they're in meetings all day every day and get most of their "real work" done during evenings and weekends, so obviously there's a big difference between Microsoft 1990 and Microsoft 2007, and among different companies.</p>
<p>The amount of training, reporting, etc. varies just as much -- even more, on a percentage basis. Best-in-class companies typically devote 8-12 days per year to training, whereas many companies we see allow technical staff to take 1 class per year. Many of the companies we see don't systematically support <em>any</em> training days per year. </p>
<p>The bottom line is that there's just too much variation among companies to make meaningful statements about "benchmark" allocations to work and overhead time categories. That doesn't mean you won't find published sources that claim to be benchmarks, but those sources are usually limited by the fact that their authors haven't had exposure to enough companies to realize how much variation there is.</p>
<span>Resources</span><ul>
<li><em><a href="http://www.stevemcconnell.com/est.htm">Software Estimation: Demystifying the Black Art</a></em> - my book on software estimation. Chapter 21 goes into detail on questions like this one, although it doesn't provide a more specific answer to this particular question than this blog post does. </li>
<li><a href="/Seminars/?fs=1">Software Estimation in Depth</a> - a Construx seminar that addresses topics like this one. I teach this class from time to time. </li>
<li><a href="/Seminars/?fs=2">Software Measurement in Depth</a> - another Construx seminar that addresses topics like this one</li>
</ul>]]></description>
  <dc:creator>stevemcc</dc:creator>
  <dc:date>2007-09-10T08:44:00Z</dc:date>
  <content:encoded><![CDATA[<p>One of my readers asked the following very reasonable question: </p>
<p>We are looking for industry benchmarks detailing the amount of time developers spend on a percentage basis in the following three categories:</p>
<p>1) Core job activities (writing, testing, deploying code, etc.)<br />2) Meetings<br />3) Administrative activities (training, reporting, etc.)</p>
<p>The questions are reasonable. Unfortunately, one of the lessons I've learned after looking at lots of data on questions like this is that sometimes reasonable questions don't have reasonable answers!</p>
<p>In this case, what I would call "project focused hours" per month can easily vary by a factor of two between different companies based on factors like how much time is spent in meetings, how long the work days are (think government job vs. internet startup), number of holidays, number of training days, number of non-project meetings, level of support required for software already in production, etc. A common "big company" planning number is 6 hours of project-focused work per day, for the days that the employee is actually at work, but that can vary a lot across big companies and even within big companies. Based on what we see in our consulting practice, I think it's rare to average 6 hours per day of truly project-focused work in a non-startup company. The most common distraction from project-focused work we see is time spent supporting prior releases that are in production.</p>
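The arithmetic behind these planning numbers can be made concrete with a back-of-the-envelope sketch. Every specific figure below (workdays per month, vacation and training averages, the support fraction) is a hypothetical assumption invented for illustration, not a benchmark; the point of this post is precisely that these numbers vary widely by company:

```python
# Illustrative estimate of "project-focused hours" per month.
# All parameter defaults are hypothetical planning assumptions.

def project_focused_hours_per_month(
    workdays_per_month=21,      # before subtracting time off
    vacation_holiday_days=2.5,  # vacation + holidays, averaged per month
    training_days=1.0,          # training, averaged per month
    focused_hours_per_day=6.0,  # common "big company" planning number
    support_fraction=0.25,      # focused time lost to supporting prior releases
):
    days_on_project = workdays_per_month - vacation_holiday_days - training_days
    return days_on_project * focused_hours_per_day * (1 - support_fraction)

# A heavily loaded profile vs. a leaner one can approach a 2x difference:
big_co = project_focused_hours_per_month()
lean_co = project_focused_hours_per_month(
    vacation_holiday_days=1.5, training_days=0.5,
    focused_hours_per_day=7.5, support_fraction=0.05)
print(round(big_co), round(lean_co))  # 79 135
```

Under these made-up inputs the two profiles differ by roughly 1.7x before even touching meeting load, which is why a single "benchmark" percentage is so hard to defend.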
<p>The number of meetings varies a lot too and is significantly affected by company culture. When I was at Microsoft in 1990-91, I probably spent less than 5 hours a week in meetings. In contrast, a former Microsoft employee told me earlier this year that on his team he was booked in meetings from 10:00 to 4:00, five days a week. Lots of managers at other companies have told me that they're in meetings all day every day and get most of their "real work" done during evenings and weekends, so obviously there's a big difference between Microsoft 1990 and Microsoft 2007, and among different companies.</p>
<p>The amount of training, reporting, etc. varies just as much--on a percentage basis, it varies even more. Best-in-class companies typically devote 8-12 days per year to training, whereas many companies we see allow technical staff to take only one class per year. Many of the companies we see don't systematically support <em>any</em> training days per year. </p>
<p>The bottom line is that there's just too much variation among companies to make meaningful statements about "benchmark" allocations to work and overhead time categories. That doesn't mean you won't find published sources that claim to be benchmarks, but those sources are usually limited by the fact that their authors haven't had exposure to enough companies to realize how much variation there is.</p>
<span>Resources</span><ul>
<li><em><a href="http://www.stevemcconnell.com/est.htm">Software Estimation: Demystifying the Black Art</a></em> - my book on software estimation. Chapter 21 goes into detail on questions like this one, although it doesn't provide a more specific answer to this particular question than this blog post does. </li>
<li><a href="https://www.construx.com/Seminars/?fs=1">Software Estimation in Depth</a> - a Construx seminar that addresses topics like this one. I teach this class from time to time. </li>
<li><a href="https://www.construx.com/Seminars/?fs=2">Software Measurement in Depth</a> - another Construx seminar that addresses topics like this one</li>
</ul>]]></content:encoded>
 </item>
 <item rdf:about="/10x_Software_Development/How_to_Self-Study_for_a_Computer_Programming_Job/?blogid=23485">
  <title>How to Self-Study for a Computer Programming Job</title>
  <link>https://www.construx.com/10x_Software_Development/How_to_Self-Study_for_a_Computer_Programming_Job/?blogid=23485</link>
  <description><![CDATA[<p>Readers will sometimes ask me, "I don't have a college degree in computer science. How can I study for a computer programming job?" Both my company in general and I personally have put a lot of work into answering that particular question over the past 10 years. The specific answer is based on a few questions that each individual must first answer for himself or herself:</p>
<p>1. Do you want to go back to school, or do you want to self study?<br />2. Are you more interested in doing software development or in studying computer science?</p>
<p><strong>If you're able/willing to go back to school ... </strong></p>
<p>If you are interested in computer science (the study of computers--more research oriented), then you could look at <a title="http://www.acm.org/education/curricula.html" href="http://www.acm.org/education/curricula.html">http://www.acm.org/education/curricula.html</a>, which gives recommendations for how universities should teach computer science. You might have to look through a few documents to find exactly what you are looking for. You could also look at university programs and see what progression of classes they recommend. This hasn't been my area of professional focus, so I can't offer any more on this point.</p>
<p>If you are more interested in becoming a software developer yourself, I suggest that you look at the recommended software engineering curriculum guidelines (as opposed to computer science curriculum guidelines), here: <a title="http://sites.computer.org/ccse/#_Release_of_SE2004" href="http://sites.computer.org/ccse/#_Release_of_SE2004">http://sites.computer.org/ccse/#_Release_of_SE2004</a>. In this area, too, you could look at university programs and see the progression of classes they recommend. My company maintains a list of accredited software engineering programs here: <a title="http://www.construx.com/Resources/Undergraduate_Programs/" href="/Resources/Undergraduate_Programs/">http://www.construx.com/Resources/Undergraduate_Programs/ </a>. </p>
<p><strong>If you're not interested in going back to school and want to self study</strong>, the recommendations are different. This is what most people who contact me are asking about, which is not surprising considering that only about 40% of people working as programmers originally got a CS degree or equivalent, and only about 60% ever got a computer-related degree.</p>
<p>My company has put together several sample professional development plans (PDPs). Each of these plans describes a progression of work experience, reading, and classes that a person should take to achieve what we call "competency" and "leadership" levels in software development, testing, or project management. We originally developed these plans about 10 years ago for Construx's internal use.</p>
<ul>
<li><p><a href="/Resources/Developer_Professional_Development_Plan/">Programmer's PDP</a>  (you'll have to log in to our website to view these)</p>
</li>
<li><p><a href="/Resources/Tester_Professional_Development_Plan/">Tester's PDP</a>  </p>
</li>
<li><p><a href="/Resources/Manager_Professional_Development_Plan/">Project Manager's PDP</a>  </p>
</li>
</ul>
<p>For example, here's an excerpt from the sample Programmer's PDP:</p>
<div><div class="resourceBlock clearfix"><div class="resourceInnerBlock clearfix"><div class="rbLeftHeader">Activity Type </div>
<div class="rbRightHeader">Details </div>
<div class="rbLeft">Work Experience </div>
<div class="rbRight"><ul>
<li>Act as a developer on at least one project </li>
<li>Act as a backup <a href="/Construx_Pages/Resources/CxOne/CxOne_Basic/CxOne_Basic/">construction lead</a> on at least one project </li>
<li>Act as a backup <a href="/Construx_Pages/Resources/CxOne/CxOne_Basic/CxOne_Basic/">design lead</a> on at least one project </li>
<li>Develop unit or module level test cases for a project </li>
<li>Write one or more designs </li>
<li>Participate in the release process of a project </li>
<li>Perform personal planning and tracking on a project </li>
<li>Participate in a code review </li>
<li>Participate in a design review </li>
<li>Participate in an informal review </li>
<li>Participate in an inspection </li>
<li>Review a project's documentation including the quality plan, test plans, test cases, project plans, schedules, and work breakdown structures </li>
</ul>
</div>
<div class="rbLeft"><strong>Reading</strong>  </div>
<div class="rbRight"><ul>
<li><em>Code Complete, 2nd Ed</em>, Steve McConnell </li>
<li><em>Programming Pearls 2nd Ed</em>, Jon Bentley </li>
<li><em>Applying UML &amp; Patterns 2nd Ed</em>, Craig Larman </li>
<li><em>Conceptual Blockbusting</em>, James Adams </li>
<li><em>Software Creativity, Version 2.0</em>, Robert Glass </li>
<li><em>Rapid Development</em>, Steve McConnell </li>
<li><em>Software Project Survival Guide</em>, Steve McConnell </li>
<li><em>UML Distilled</em>, Martin Fowler et al </li>
</ul>
</div>
<div class="rbLeft"><strong>Classes</strong>  </div>
<div class="rbRight"><ul>
<li><a href="/Seminars/?dm=1">Code Complete</a>  </li>
<li><a href="/Seminars/?fs=1">Object Oriented Analysis and Design using the UML</a>  </li>
<li><a href="/Seminars/?fs=2">Peer Reviews for Higher Quality and Productivity</a></li>
</ul>
</div>
</div>
</div>
</div>
<p>This table describes the work needed to get a developer to Level 10 on our PDL. (We consider Level 12 to be full professional standing.) See our website for descriptions of the work needed to attain Level 11 and Level 12.</p>
<p>It's important to recognize that the PDPs on the website are <em>samples</em>. In practice, employees normally work with a mentor to define the exact details of their PDPs. Our practice allows substitution of books, classes, and experience as long as the substitutions collectively are approximately equivalent to the sample. The main purpose of the sample is to provide a starting point so that an employee can create a PDP based on something more helpful than a blank piece of paper.</p>
<p>Sample plans like these are often sufficient for an individual's use. But they are not the full story. They are one of many outputs of our much more comprehensive Professional Development Ladder (PDL). You can see an overview of our PDL <a href="/Resources/Professional_Development_Ladder/">here</a>, and you can also download our <a href="/Thought_Leadership/Events/Practical_benefits_profound_results/">PDL whitepaper</a>.</p>
<p><strong>Organizational Support for Professional Development</strong></p>
<p>After a few years we found that some of our client companies were interested in providing better career pathing for their technical professionals, and it turned out that the way we had designed our PDL made it easily adaptable for other companies' use.</p>
<p>The basic idea is that we started with the SWEBOK (software engineering body of knowledge) as an organizing framework. We customized each of the SWEBOK's 10 knowledge areas into more practically focused knowledge areas that we called <a style="COLOR: blue; TEXT-DECORATION: underline" href="/Resources/CxOne/">Construx Knowledge Areas</a> (CKAs). The knowledge areas are things like requirements, design, construction, testing, and so on.</p>
<p>We then defined Capability Levels within each of the 10 CKAs. The capability levels are</p>
<ul>
<li>Introductory -- performs basic work in an area, usually under supervision </li>
<li>Competence - performs independent work in an area, largely self-supervised </li>
<li>Leadership - performs exemplary work in an area; serves as a role model for others; regularly coaches others </li>
<li>Mastery - performs reference work in an area; work has not just company visibility, but industry visibility; provides leadership both within Construx and to the industry at large </li>
</ul>
<p>Our PDL defines specific steps that a technical professional can take to achieve Introductory, Competence, and Leadership capability within each of the 10 CKAs. Consequently we end up with a matrix of 10 CKAs crossed with 3 Capability Levels -- i.e., a 10x3 = 30-box matrix -- which in total has several hundred entries for the work experience, reading, and classes needed to attain each level.</p>
<p>The 10x3 matrix structure can be easily applied to provide a simple way of defining consistent and structured career progression, including guidance for professional development and promotion criteria. For example, within Construx we've said that to attain what we call "Level 12" (also known as Professional Software Engineer status at Construx), a professional must achieve Introductory capability in all 10 CKAs, Competency level in 8 of the 10, and Leadership level in 3 of the 10.</p>
<p>Thus someone who has a development focus might go for leadership in Design, Construction, and Tools &amp; Methods. Someone who has a test focus could go for leadership in Testing, Quality, and Tools &amp; Methods. Someone with a project management focus could go for leadership in Engineering Management, Quality, and Requirements. The cool thing about our PDL is that it provides consistency across these disciplines and level-sets the amount of work anyone will need to do to achieve full professional status, regardless of whether they choose to specialize in development, testing, management, QA, requirements, or another discipline. It also has the advantage of being aligned with the industry-standard SWEBOK, which makes it easier for companies to create customized versions of our PDL if they choose to do that.</p>
<p><strong>Question for You</strong></p>
<p>We originally created our PDL because we had noticed that most companies provided little or no career guidance to their software professionals. I thought that software professionals deserved better and would appreciate a clearer roadmap to advance their professional capabilities and their careers.</p>
<p>What do you think? Have you been satisfied with the career guidance provided by the companies you've worked for? What guidance have they provided? Has it been enough? What's been missing? I'd love to hear your thoughts. </p>]]></description>
  <dc:creator>stevemcc</dc:creator>
  <dc:date>2007-08-12T09:30:00Z</dc:date>
  <content:encoded><![CDATA[<p>Readers will sometimes ask me, "I don't have a college degree in computer science. How can I study for a computer programming job?" Both my company in general and I personally have put a lot of work into answering that particular question over the past 10 years. The specific answer is based on a few questions that each individual must first answer for himself or herself:</p>
<p>1. Do you want to go back to school, or do you want to self study?<br />2. Are you more interested in doing software development or in studying computer science?</p>
<p><strong>If you're able/willing to go back to school ... </strong></p>
<p>If you are interested in computer science (the study of computers--more research oriented), then you could look at <a title="http://www.acm.org/education/curricula.html" href="http://www.acm.org/education/curricula.html">http://www.acm.org/education/curricula.html</a>, which gives recommendations for how universities should teach computer science. You might have to look through a few documents to find exactly what you are looking for. You could also look at university programs and see what progression of classes they recommend. This hasn't been my area of professional focus, so I can't offer any more on this point.</p>
<p>If you are more interested in becoming a software developer yourself, I suggest that you look at the recommended software engineering curriculum guidelines (as opposed to computer science curriculum guidelines), here: <a title="http://sites.computer.org/ccse/#_Release_of_SE2004" href="http://sites.computer.org/ccse/#_Release_of_SE2004">http://sites.computer.org/ccse/#_Release_of_SE2004</a>. In this area, too, you could look at university programs and see the progression of classes they recommend. My company maintains a list of accredited software engineering programs here: <a title="http://www.construx.com/Resources/Undergraduate_Programs/" href="https://www.construx.com/Resources/Undergraduate_Programs/">http://www.construx.com/Resources/Undergraduate_Programs/ </a>. </p>
<p><strong>If you're not interested in going back to school and want to self study</strong>, the recommendations are different. This is what most people who contact me are asking about, which is not surprising considering that only about 40% of people working as programmers originally got a CS degree or equivalent, and only about 60% ever got a computer-related degree.</p>
<p>My company has put together several sample professional development plans (PDPs). Each of these plans describes a progression of work experience, reading, and classes that a person should take to achieve what we call "competency" and "leadership" levels in software development, testing, or project management. We originally developed these plans about 10 years ago for Construx's internal use.</p>
<ul>
<li><p><a href="https://www.construx.com/Resources/Developer_Professional_Development_Plan/">Programmer's PDP</a>  (you'll have to log in to our website to view these)</p>
</li>
<li><p><a href="https://www.construx.com/Resources/Tester_Professional_Development_Plan/">Tester's PDP</a>  </p>
</li>
<li><p><a href="https://www.construx.com/Resources/Manager_Professional_Development_Plan/">Project Manager's PDP</a>  </p>
</li>
</ul>
<p>For example, here's an excerpt from the sample Programmer's PDP:</p>
<div><div class="resourceBlock clearfix"><div class="resourceInnerBlock clearfix"><div class="rbLeftHeader">Activity Type </div>
<div class="rbRightHeader">Details </div>
<div class="rbLeft">Work Experience </div>
<div class="rbRight"><ul>
<li>Act as a developer on at least one project </li>
<li>Act as a backup <a href="https://www.construx.com/Construx_Pages/Resources/CxOne/CxOne_Basic/CxOne_Basic/">construction lead</a> on at least one project </li>
<li>Act as a backup <a href="https://www.construx.com/Construx_Pages/Resources/CxOne/CxOne_Basic/CxOne_Basic/">design lead</a> on at least one project </li>
<li>Develop unit or module level test cases for a project </li>
<li>Write one or more designs </li>
<li>Participate in the release process of a project </li>
<li>Perform personal planning and tracking on a project </li>
<li>Participate in a code review </li>
<li>Participate in a design review </li>
<li>Participate in an informal review </li>
<li>Participate in an inspection </li>
<li>Review a project's documentation including the quality plan, test plans, test cases, project plans, schedules, and work breakdown structures </li>
</ul>
</div>
<div class="rbLeft"><strong>Reading</strong>  </div>
<div class="rbRight"><ul>
<li><em>Code Complete, 2nd Ed</em>, Steve McConnell </li>
<li><em>Programming Pearls 2nd Ed</em>, Jon Bentley </li>
<li><em>Applying UML &amp; Patterns 2nd Ed</em>, Craig Larman </li>
<li><em>Conceptual Blockbusting</em>, James Adams </li>
<li><em>Software Creativity, Version 2.0</em>, Robert Glass </li>
<li><em>Rapid Development</em>, Steve McConnell </li>
<li><em>Software Project Survival Guide</em>, Steve McConnell </li>
<li><em>UML Distilled</em>, Martin Fowler et al </li>
</ul>
</div>
<div class="rbLeft"><strong>Classes</strong>  </div>
<div class="rbRight"><ul>
<li><a href="https://www.construx.com/Seminars/?dm=1">Code Complete</a>  </li>
<li><a href="https://www.construx.com/Seminars/?fs=1">Object Oriented Analysis and Design using the UML</a>  </li>
<li><a href="https://www.construx.com/Seminars/?fs=2">Peer Reviews for Higher Quality and Productivity</a></li>
</ul>
</div>
</div>
</div>
</div>
<p>This table describes the work needed to get a developer to Level 10 on our PDL. (We consider Level 12 to be full professional standing.) See our website for descriptions of the work needed to attain Level 11 and Level 12.</p>
<p>It's important to recognize that the PDPs on the website are <em>samples</em>. In practice, employees normally work with a mentor to define the exact details of their PDPs. Our practice allows substitution of books, classes, and experience as long as the substitutions collectively are approximately equivalent to the sample. The main purpose of the sample is to provide a starting point so that an employee can create a PDP based on something more helpful than a blank piece of paper.</p>
<p>Sample plans like these are often sufficient for an individual's use. But they are not the full story. They are one of many outputs of our much more comprehensive Professional Development Ladder (PDL). You can see an overview of our PDL <a href="https://www.construx.com/Resources/Professional_Development_Ladder/">here</a>, and you can also download our <a href="https://www.construx.com/Thought_Leadership/Events/Practical_benefits_profound_results/">PDL whitepaper</a>.</p>
<p><strong>Organizational Support for Professional Development</strong></p>
<p>After a few years we found that some of our client companies were interested in providing better career pathing for their technical professionals, and it turned out that the way we had designed our PDL made it easily adaptable for other companies' use.</p>
<p>The basic idea is that we started with the SWEBOK (software engineering body of knowledge) as an organizing framework. We customized each of the SWEBOK's 10 knowledge areas into more practically focused knowledge areas that we called <a style="COLOR: blue; TEXT-DECORATION: underline" href="https://www.construx.com/Resources/CxOne/">Construx Knowledge Areas</a> (CKAs). The knowledge areas are things like requirements, design, construction, testing, and so on.</p>
<p>We then defined Capability Levels within each of the 10 CKAs. The capability levels are</p>
<ul>
<li>Introductory -- performs basic work in an area, usually under supervision </li>
<li>Competence - performs independent work in an area, largely self-supervised </li>
<li>Leadership - performs exemplary work in an area; serves as a role model for others; regularly coaches others </li>
<li>Mastery - performs reference work in an area; work has not just company visibility, but industry visibility; provides leadership both within Construx and to the industry at large </li>
</ul>
<p>Our PDL defines specific steps that a technical professional can take to achieve Introductory, Competence, and Leadership capability within each of the 10 CKAs. Consequently we end up with a matrix of 10 CKAs crossed with 3 Capability Levels -- i.e., a 10x3 = 30-box matrix -- which in total has several hundred entries for the work experience, reading, and classes needed to attain each level.</p>
<p>The 10x3 matrix structure can be easily applied to provide a simple way of defining consistent and structured career progression, including guidance for professional development and promotion criteria. For example, within Construx we've said that to attain what we call "Level 12" (also known as Professional Software Engineer status at Construx), a professional must achieve Introductory capability in all 10 CKAs, Competency level in 8 of the 10, and Leadership level in 3 of the 10.</p>
<p>Thus someone who has a development focus might go for leadership in Design, Construction, and Tools &amp; Methods. Someone who has a test focus could go for leadership in Testing, Quality, and Tools &amp; Methods. Someone with a project management focus could go for leadership in Engineering Management, Quality, and Requirements. The cool thing about our PDL is that it provides consistency across these disciplines and level-sets the amount of work anyone will need to do to achieve full professional status, regardless of whether they choose to specialize in development, testing, management, QA, requirements, or another discipline. It also has the advantage of being aligned with the industry-standard SWEBOK, which makes it easier for companies to create customized versions of our PDL if they choose to do that.</p>
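<p>The Level 12 rule above (Introductory in all 10 CKAs, Competence in at least 8, Leadership in at least 3) is easy to express as a small eligibility check. A minimal sketch follows; the three CKA names not mentioned in this post are filled in from the SWEBOK knowledge areas as an assumption, and the sample profile is invented for illustration:</p>

```python
# Hypothetical sketch of the Level 12 check. CKA names beyond those
# mentioned in the post are assumed from SWEBOK; the profile is invented.

LEVELS = {"None": 0, "Introductory": 1, "Competence": 2,
          "Leadership": 3, "Mastery": 4}

CKAS = ["Requirements", "Design", "Construction", "Testing", "Quality",
        "Engineering Management", "Tools & Methods", "Maintenance",
        "Configuration Management", "Process"]

def meets_level_12(capabilities):
    """capabilities: dict mapping each CKA name to a capability level name."""
    scores = [LEVELS[capabilities.get(cka, "None")] for cka in CKAS]
    return (all(s >= LEVELS["Introductory"] for s in scores)
            and sum(s >= LEVELS["Competence"] for s in scores) >= 8
            and sum(s >= LEVELS["Leadership"] for s in scores) >= 3)

# A development-focused profile: Leadership in Design, Construction, and
# Tools & Methods; Competence in five more CKAs; Introductory elsewhere.
profile = dict.fromkeys(CKAS, "Introductory")
profile.update({"Design": "Leadership", "Construction": "Leadership",
                "Tools & Methods": "Leadership",
                "Requirements": "Competence", "Testing": "Competence",
                "Quality": "Competence", "Maintenance": "Competence",
                "Process": "Competence"})
print(meets_level_12(profile))  # True
```

<p>The same check accepts a test-focused or management-focused profile unchanged, which is the consistency-across-disciplines point made above.</p>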
<p><strong>Question for You</strong></p>
<p>We originally created our PDL because we had noticed that most companies provided little or no career guidance to their software professionals. I thought that software professionals deserved better and would appreciate a clearer roadmap to advance their professional capabilities and their careers.</p>
<p>What do you think? Have you been satisfied with the career guidance provided by the companies you've worked for? What guidance have they provided? Has it been enough? What's been missing? I'd love to hear your thoughts. </p>]]></content:encoded>
 </item>
 <item rdf:about="/10x_Software_Development/Best_Companies_to_Work_For,_Part_2/?blogid=23485">
  <title>Best Companies to Work For, Part 2</title>
  <link>https://www.construx.com/10x_Software_Development/Best_Companies_to_Work_For,_Part_2/?blogid=23485</link>
  <description><![CDATA[<span>Construx Employee Perspective</span><p>As I mentioned in an <a href="/10x_Software_Development/Best_Companies_to_Work_For,_Part_1/ ">earlier post</a>, at the end of June I was very pleased to learn that Construx Software (my company) had been recognized as the <a href="http://washingtonceo.com/news-article-display/article/178/programmers.html">Best Small Company to Work For</a> in Washington state. Getting the outside validation was gratifying, but what does the inside view look like? What do Construx's employees think makes Construx a good company to work for? We held an all company lunch discussion in July to talk about that question, and here's what people said.</p>
<p><strong>Participatory Decision Making. </strong>We don't make very many decisions behind closed doors, or at least not without getting input from some, most, or all employees. We survey employees regularly. We do a big employee satisfaction survey once a year. We survey on other issues as needed, on topics like when we should hold our holiday dinner, which new benefits employees would value more, and so on. The Washington CEO article also commented on the degree to which we involved employees during our rocky period in 2001-2002, which our employees brought up again during our lunch discussion.</p>
<p><strong>Make Your Own Job. </strong>Our technical service providers (TSPs) essentially define their own jobs within three broad parameters. First, their work needs to support our mission (Advancing the art and science of commercial software engineering). Second, they need to hit their billable revenue target (which isn't a problem, since most of our TSPs beat their target by at least 50%). Third, their work needs to meet our service quality targets -- we reserve the right to pull the plug on offerings that aren't delighting our clients. As long as the work they want to do meets those criteria, each TSP has a lot of latitude. A TSP can develop a new course more or less according to his/her interests. A TSP can work on a new consulting offering, spend time blogging, write a book, etc. The people who like this approach love it. A couple of people have seemed to want more direction. In any case, the decision about what to work on is made collaboratively (see point #1, above), so people who want more direction get that, and people who have a strong feeling about a direction they want to pursue normally get that, too.</p>
<p>The flip side of this is that employees become highly responsible for service quality. This is fine for us as we want to hire people who seek out responsibility.</p>
<p><strong>Lack of Competitiveness/Helping Each Other. </strong>Our environment is very cooperative. TSPs help each other; sales personnel help each other; TSPs help sales staff, and sales staff help the TSPs. We’ve worked hard to replace “us vs. them” thinking with “we” thinking, and I think that’s pretty deeply ingrained in our culture at this point. We understand that we’re all in this together, and people act accordingly.</p>
<p>This can go to fairly extreme degrees, with one TSP pitching in and teaching a class in a remote city to help out another TSP.</p>
<p><strong>Flexibility. </strong>We offer a lot of flexibility in terms of hours and days worked, subject to the three criteria mentioned at the top of the post.</p>
<p><strong>Profitability. </strong>We believe strongly that we can be good to our employees and still be profitable – furthermore, that being good to our employees will actually help profitability in the long run.</p>
<p><strong>Easy-going culture. </strong>There isn’t much yelling here. It’s pretty relaxed. We wear business casual clothing (even on the casual side of “business casual”), including shorts in the summer. If we have client meetings, we expect people to dress appropriately for the client. In the software business, that’s usually business casual, but probably not shorts and t-shirts for most of our clients.</p>
<p><strong>Treating Employees as Humans First, Employees Second. </strong>We had a rough patch this spring during which we had 3 employees lose parents in a 60-day period. Losing a parent is a major life event, and work needs to take a back seat for a while when that happens. Our employees were appreciative that we recognized that. I have to admit that I am surprised that people appreciate this, mostly because I simply can’t imagine a manager being so heartless as to not recognize the significance of that kind of major event.</p>
<p>At a more day-to-day level, we’re also pretty understanding of people needing to leave to pick up their kids from daycare, attend school plays, ball games, etc. Sometimes work has to take precedence, but usually work life and home life can be kept in balance.</p>
<p><strong>Overall. </strong>Our lunch discussion didn’t turn out to be a very comprehensive or systematic discussion. I think we mostly just hit points that the Washington CEO writeup missed, or that seemed underemphasized in that article.</p>
<p>I’ll write up what I think makes us a best company to work for in a future blog entry. <br /></p>]]></description>
  <dc:creator>stevemcc</dc:creator>
  <dc:date>2007-08-06T15:35:00Z</dc:date>
  <content:encoded><![CDATA[<span>Construx Employee Perspective</span><p>As I mentioned in an <a href="https://www.construx.com/10x_Software_Development/Best_Companies_to_Work_For,_Part_1/ ">earlier post</a>, at the end of June I was very pleased to learn that Construx Software (my company) had been recognized as the <a href="http://washingtonceo.com/news-article-display/article/178/programmers.html">Best Small Company to Work For</a> in Washington state. Getting the outside validation was gratifying, but what does the inside view look like? What do Construx's employees think makes Construx a good company to work for? We held an all company lunch discussion in July to talk about that question, and here's what people said.</p>
<p><strong>Participatory Decision Making. </strong>We don't make very many decisions behind closed doors, or at least not without getting input from some, most, or all employees. We survey employees regularly. We do a big employee satisfaction survey once a year. We survey on other issues as needed, on topics like when we should hold our holiday dinner, which new benefits employees would value more, and so on. The Washington CEO article also commented on the degree to which we involved employees during our rocky period in 2001-2002, which our employees brought up again during our lunch discussion.</p>
<p><strong>Make Your Own Job. </strong>Our technical service providers (TSPs) essentially define their own jobs within three broad parameters. First, their work needs to support our mission (Advancing the art and science of commercial software engineering). Second, they need to hit their billable revenue target (which isn't a problem, since most of our TSPs beat their target by at least 50%). Third, their work needs to meet our service quality targets -- we reserve the right to pull the plug on offerings that aren't delighting our clients. As long as the work they want to do meets those criteria, each TSP has a lot of latitude. A TSP can develop a new course more or less according to his/her interests. A TSP can work on a new consulting offering, spend time blogging, write a book, etc. The people who like this approach, love it. A couple of people have seemed to want more direction. In any case, the decision about what to work on is made collaboratively (see point #1, above), so people who want more direction get that, and people who have a strong feeling about a direction they want to pursue normally get that, too.</p>
<p>The flip side of this is that employees become highly responsible for service quality. That's fine with us, since we want to hire people who seek out responsibility.</p>
<p><strong>Lack of Competitiveness/Helping Each Other. </strong>Our environment is very cooperative. TSPs help each other; sales personnel help each other; TSPs help sales staff, and sales staff helps the TSPs. We’ve worked hard to replace “us vs. them” thinking with “we” thinking, and I think that’s pretty deeply engrained in our culture at this point. We understand that we’re all in this together, and people act accordingly.</p>
<p>This can go to fairly extreme degrees, with one TSP pitching in and teaching a class in a remote city to help out another TSP.</p>
<p><strong>Flexibility. </strong>We offer a lot of flexibility in terms of hours and days worked, subject to the three criteria mentioned at the top of the post.</p>
<p><strong>Profitability. </strong>We believe strongly that we can be good to our employees and still be profitable – furthermore, that being good to our employees will actually help profitability in the long run.</p>
<p><strong>Easygoing culture. </strong>There isn’t much yelling here. It’s pretty relaxed. We wear business casual clothing (even on the casual side of “business casual”), including shorts in the summer. If we have client meetings, we expect people to dress appropriately for the client. In the software business, that’s usually business casual, but probably not shorts and t-shirts for most of our clients.</p>
<p><strong>Treating Employees as Humans First, Employees Second. </strong>We had a rough patch this spring during which we had 3 employees lose parents in a 60-day period. Losing a parent is a major life event, and work needs to take a back seat for a while when that happens. Our employees were appreciative that we recognized that. I have to admit that I am surprised that people appreciate this, mostly because I simply can’t imagine a manager being so heartless as to not recognize the significance of that kind of major event.</p>
<p>At a more day-to-day level, we’re also pretty understanding of people needing to leave to pick up their kids from daycare, attend school plays, ball games, etc. Sometimes work has to take precedence, but usually work life and home life can be kept in balance.</p>
<p><strong>Overall. </strong>Our lunch discussion didn’t turn out to be a very comprehensive or systematic discussion. I think we mostly just hit points that the Washington CEO writeup missed, or that seemed underemphasized in that article.</p>
<p>I’ll write up what I think makes us a best company to work for in a future blog entry. <br /></p>]]></content:encoded>
 </item>
 <item rdf:about="/10x_Software_Development/Best_Companies_to_Work_For,_Part_1/?blogid=23485">
  <title>Best Companies to Work For, Part 1</title>
  <link>https://www.construx.com/10x_Software_Development/Best_Companies_to_Work_For,_Part_1/?blogid=23485</link>
  <description><![CDATA[<p>[Warning, bragging ahead]</p>
<p>At the end of June I was very pleased to learn that Construx Software (my company) had been recognized as the <a title="Best Small Company to Work For" href="http://washingtonceo.com/news-article-display/article/178/programmers.html" target="_blank">Best Small Company to Work For</a> in Washington state. <em>Washington CEO </em>magazine published a list of the 100 Best Companies to Work For. Construx topped the "Small Companies" category. With a total score of 148.87 (the sum of the employee survey scores and judges' scores), Construx easily topped the winner in the "large company" category, which scored 128.69, and the medium category winner, which scored 132.64, making Construx the highest-scoring company overall.</p>
<p>Construx has been a finalist all four times we participated in the survey since I founded the company in 1996, so it was wonderful to come out #1 in 2007. One reason I like the recognition is that it shows that my books and articles aren't just theoretical -- I'm willing to put my money where my mouth is for practices like private offices, morale events, and so on -- and the survey results show that employees respond very favorably to these practices.</p>
<p><em>Washington CEO </em>considered 10 categories:</p>
<ul>
<li>Communication</li>
<li>Training &amp; Education</li>
<li>Responsibility &amp; Decision Making</li>
<li>Performance Standards</li>
<li>Rewards &amp; Recognition</li>
<li>Benefits</li>
<li>Leadership</li>
<li>Work Environment</li>
<li>Hiring &amp; Retention</li>
<li>Corporate Culture</li>
</ul>
<p>For scores from the panel of 5 judges, Construx received perfect scores in all 10 categories, for a total of 50.00 points, the maximum possible. The winner in the large company category scored 41.95 from the judges, and the winner in the medium company category scored 39.94. In other words, the other category winners achieved only 80-85% of Construx's winning score.</p>
<p>It's always interesting to compare what outsiders think vs. what insiders think. In this blog posting, I'll tell you what <em>Washington CEO</em> included in their description of what makes Construx a best company. <strong>In part 2, </strong>I'll tell you what Construx's employees think. And in <strong>Part 3 </strong>I'll tell you what I think.</p>
<span>What Makes Construx the Best Company to Work For, Part 1? (<em>Washington CEO</em> magazine view)</span><p><em>Washington CEO </em>magazine mentioned numerous specific points that they felt made Construx a best company:</p>
<span>Leadership</span><p>Construx is led by a software industry guru. [that's me -- I didn't write that]. Construx's CEO avoids arrogance and defensiveness, and strives for perfection in everything. He takes time to talk and listen.</p>
<p>Construx's COO communicates openly and directly. He spends a lot of time worrying about morale. He makes sure that issues don't tend to fester.</p>
<p>When our company hit hard times during the dot com collapse (when many of our clients went out of business), Construx's management team was very open about fully disclosing all aspects of the company's financial condition with all our employees. We laid out every possible option so that employees could "walk with us" through the decisions we had to make. During that difficult time, rather than just laying off employees, we gathered input from our staff, and based on strong staff consensus, we applied across-the-board salary cuts rather than laying anyone off.</p>
<span>Benefits</span><p>Benefits are generous, including a 401(k) with 100% match up to 10% of salary; fully paid employee health-care premiums, with dependents paid at 75%; and a minimum of 24 days of vacation, increasing with seniority. Pay is at industry-average salary levels, with bonuses for "those who exceed expectations" [we wouldn't word it that way, since virtually everyone receives bonuses of some kind or other].</p>
<p>Employees have lots of flexibility. They can set their own schedules, to balance their personal and professional lives [within the constraints of how they can still satisfy their clients], and employees can turn down assignments that aren't appealing to them as long as they're pulling their weight overall.</p>
<span>Culture</span><p>Construx holds weekly "wind downs," during which employees drink beer and wine, sit on sofas, and chat.</p>
<p>Construx has a "cozy, modern looking cafe" where employees can get free bottled water, soda, Gatorade, and most other kinds of bottled drinks.</p>
<p>The whole company has read <em>Built to Last</em> and discussed it. Everyone in the company can recite the company's mission: Advancing the art and science of commercial software engineering.</p>
<span>Focus on Employee Satisfaction</span><p>We explicitly make employee satisfaction a top priority. The COO's comp package actually ranks employee satisfaction above profit and revenue. We have a kegerator (a white refrigerator with beer taps) with three home-brewed beers on tap [the number actually varies, but that's what the article said].</p>
<p>Construx's business philosophy is "hire competent smart people and let them do their jobs." Construx expects employees to regularly develop their professional skills, and it supports them in getting better by emphasizing professional development, particularly Construx's Professional Development Ladder.</p>
<span>My Reaction</span><p>I've been interviewed enough times that I've learned that minor factual errors are to be expected. That said, I thought the <em>Washington CEO </em>article was quite accurate. We gave them 5-10 times as much content as they could describe in a short story, so they left out more than they included, and what's interesting to me are the specific points they chose to highlight.</p>
<p>Does the <em>Washington CEO </em>article really capture the reasons that our employees like working at Construx? We're having an all-company meeting Friday to discuss that, and I'll write up my employees' view of what makes Construx a Best Company to Work For in a future blog entry.</p>]]></description>
  <dc:creator>stevemcc</dc:creator>
  <dc:date>2007-07-10T14:25:00Z</dc:date>
  <content:encoded><![CDATA[<p>[Warning, bragging ahead]</p>
<p>At the end of June I was very pleased to learn that Construx Software (my company) had been recognized as the <a title="Best Small Company to Work For" href="http://washingtonceo.com/news-article-display/article/178/programmers.html" target="_blank">Best Small Company to Work For</a> in Washington state. <em>Washington CEO </em>magazine published a list of the 100 Best Companies to Work For. Construx topped the "Small Companies" category. With a total score of 148.87 (the sum of the employee survey scores and judges' scores), Construx easily topped the winner in the "large company" category, which scored 128.69, and the medium category winner, which scored 132.64, making Construx the highest-scoring company overall.</p>
<p>Construx has been a finalist all four times we participated in the survey since I founded the company in 1996, so it was wonderful to come out #1 in 2007. One reason I like the recognition is that it shows that my books and articles aren't just theoretical -- I'm willing to put my money where my mouth is for practices like private offices, morale events, and so on -- and the survey results show that employees respond very favorably to these practices.</p>
<p><em>Washington CEO </em>considered 10 categories:</p>
<ul>
<li>Communication</li>
<li>Training &amp; Education</li>
<li>Responsibility &amp; Decision Making</li>
<li>Performance Standards</li>
<li>Rewards &amp; Recognition</li>
<li>Benefits</li>
<li>Leadership</li>
<li>Work Environment</li>
<li>Hiring &amp; Retention</li>
<li>Corporate Culture</li>
</ul>
<p>For scores from the panel of 5 judges, Construx received perfect scores in all 10 categories, for a total of 50.00 points, the maximum possible. The winner in the large company category scored 41.95 from the judges, and the winner in the medium company category scored 39.94. In other words, the other category winners achieved only 80-85% of Construx's winning score.</p>
<p>It's always interesting to compare what outsiders think vs. what insiders think. In this blog posting, I'll tell you what <em>Washington CEO</em> included in their description of what makes Construx a best company. <strong>In part 2, </strong>I'll tell you what Construx's employees think. And in <strong>Part 3 </strong>I'll tell you what I think.</p>
<span>What Makes Construx the Best Company to Work For, Part 1? (<em>Washington CEO</em> magazine view)</span><p><em>Washington CEO </em>magazine mentioned numerous specific points that they felt made Construx a best company:</p>
<span>Leadership</span><p>Construx is led by a software industry guru. [that's me -- I didn't write that]. Construx's CEO avoids arrogance and defensiveness, and strives for perfection in everything. He takes time to talk and listen.</p>
<p>Construx's COO communicates openly and directly. He spends a lot of time worrying about morale. He makes sure that issues don't tend to fester.</p>
<p>When our company hit hard times during the dot com collapse (when many of our clients went out of business), Construx's management team was very open about fully disclosing all aspects of the company's financial condition with all our employees. We laid out every possible option so that employees could "walk with us" through the decisions we had to make. During that difficult time, rather than just laying off employees, we gathered input from our staff, and based on strong staff consensus, we applied across-the-board salary cuts rather than laying anyone off.</p>
<span>Benefits</span><p>Benefits are generous, including a 401(k) with 100% match up to 10% of salary; fully paid employee health-care premiums, with dependents paid at 75%; and a minimum of 24 days of vacation, increasing with seniority. Pay is at industry-average salary levels, with bonuses for "those who exceed expectations" [we wouldn't word it that way, since virtually everyone receives bonuses of some kind or other].</p>
<p>Employees have lots of flexibility. They can set their own schedules, to balance their personal and professional lives [within the constraints of how they can still satisfy their clients], and employees can turn down assignments that aren't appealing to them as long as they're pulling their weight overall.</p>
<span>Culture</span><p>Construx holds weekly "wind downs," during which employees drink beer and wine, sit on sofas, and chat.</p>
<p>Construx has a "cozy, modern looking cafe" where employees can get free bottled water, soda, Gatorade, and most other kinds of bottled drinks.</p>
<p>The whole company has read <em>Built to Last</em> and discussed it. Everyone in the company can recite the company's mission: Advancing the art and science of commercial software engineering.</p>
<span>Focus on Employee Satisfaction</span><p>We explicitly make employee satisfaction a top priority. The COO's comp package actually ranks employee satisfaction above profit and revenue. We have a kegerator (a white refrigerator with beer taps) with three home-brewed beers on tap [the number actually varies, but that's what the article said].</p>
<p>Construx's business philosophy is "hire competent smart people and let them do their jobs." Construx expects employees to regularly develop their professional skills, and it supports them in getting better by emphasizing professional development, particularly Construx's Professional Development Ladder.</p>
<span>My Reaction</span><p>I've been interviewed enough times that I've learned that minor factual errors are to be expected. That said, I thought the <em>Washington CEO </em>article was quite accurate. We gave them 5-10 times as much content as they could describe in a short story, so they left out more than they included, and what's interesting to me are the specific points they chose to highlight.</p>
<p>Does the <em>Washington CEO </em>article really capture the reasons that our employees like working at Construx? We're having an all-company meeting Friday to discuss that, and I'll write up my employees' view of what makes Construx a Best Company to Work For in a future blog entry.</p>]]></content:encoded>
 </item>
 <item rdf:about="/10x_Software_Development/Rumors_of_Software_Engineerings_Death_are_Greatly_Exaggerated_(aka_Software_Engineering_Ignorance,_Part_II)/?blogid=23485">
  <title>Rumors of Software Engineering's Death are Greatly Exaggerated (aka Software Engineering Ignorance, Part II)</title>
  <link>https://www.construx.com/10x_Software_Development/Rumors_of_Software_Engineerings_Death_are_Greatly_Exaggerated_(aka_Software_Engineering_Ignorance,_Part_II)/?blogid=23485</link>
  <description><![CDATA[<p>A reader of my previous blog post on <a href="/10x_Software_Development/Software_Engineering_Ignorance/" title="Software Engineering Ignorance">Software Engineering Ignorance</a> pointed me to Eric Wise's blog post <a href="http://theruntime.com/blogs/ericwise/archive/2007/06/26/Rejecting-Software-Engineering.aspx">Rejecting Software Engineering</a>. Eric seems like a bright guy, and he's a persuasive writer, but his post is another example of what I was referring to in my earlier post -- that is, people who are uninformed about software engineering spreading misinformation about it. </p>
<p>One of Eric's arguments that is representative of other published arguments is that "software isn't like real engineering." But the "facts" he presents about real engineering are <em>way </em>off base. For example, he asserts early in his post that real engineering has "near perfect information on durability, composition, balance, etc.," but that claim is idealized and not correct. When John Roebling designed the Brooklyn Bridge, for example, the properties of steel cables weren't well understood, and so he used a safety factor of FOUR in designing the cable supports for the bridge. Obviously that is not even close to "near perfect information." Indeed, one of the hallmarks of engineering as opposed to science is that engineers will work with materials whose properties are not entirely understood, and they'll factor in safety margins until the science comes along later and allows more precision in the engineer's use of those materials. </p>
<p>Eric's post goes on to use poor estimation in software as a point against treating software as engineering: "Look at the estimation problems we have in software." Again, this assumes an idealized and incorrect view of other engineering disciplines. Can you say "Big Dig"? The largest "real engineering" project in recent memory, the Big Dig was originally estimated to cost $2.6 billion. The final cost was about $15 billion. [Thanks to an alert reader for pointing out an error in my numbers, now corrected.] In Seattle (where I live), the construction cost of the Seattle Mariners' baseball stadium ended up being nearly double the original estimates. There have been many, many cases like this, which I discuss in my book, <a href="http://www.stevemcconnell.com/est.htm">Software Estimation: Demystifying the Black Art</a>. Estimation error in software is not any better or worse than it is in other branches of engineering -- the central issue is that estimating large, one-of-a-kind artifacts is always going to be subject to a high degree of error. Estimating the 40th similar house you build in a housing development is easy, but so is estimating the 40th similar customized version of a software product you deploy. </p>
<p>Eric's post cites the fact that there are 10:1 differences in programmer productivity as an argument against treating software as engineering. Oddly, Eric cites drywall installers in support of this point, I guess to say that drywall installers are associated with traditional engineering. Here again, Eric needs to check his facts. Has he ever worked with drywall installers? I can tell you from personal experience that there is definitely a 10x difference between drywall installers, especially in terms of quality. Some guys install drywall in a way that makes it nearly impossible to texture well. Other guys get it right the first time, and texturing it is really easy. </p>
<p>The fact is that 10:1 differences in productivity and quality aren't unique to software, and so that fact doesn't differentiate software from engineering or from anything else. There was an interesting study conducted in the 1970s that found that the 80/20 rule applies in virtually every discipline: 20% of the programmers write 80% of the code, 20% of the police detectives make 80% of the arrests, 20% of the NFL quarterbacks make 80% of the touchdown passes, 20% of writers write 80% of the best selling novels, etc. Eric's observation that there are significant differences in productivity is valid, but it doesn't have any bearing on whether software is engineering. </p>
<p>Eric invokes the work of David Parnas as an argument against treating software as engineering, and says Parnas's work in information hiding undermines the openness that is needed for real engineering. I honestly could not follow the logic of his argument on this point. Moreover, Parnas has been one of the earliest and most prominent proponents of treating software as engineering. Indeed, Parnas founded Canada's first undergraduate program in software engineering at McMaster University. </p>
<p>Software engineering has already been defined as engineering; we have an international reference standard for that definition; the field's two largest professional bodies have jointly adopted a professional code of conduct for software engineers; we have accreditation standards for university programs in software engineering; numerous university programs have already been accredited; and several countries are licensing professional engineers in software. </p>
<p>Eric's conclusion that "I don't think software development will ever be able to be defined as engineering in the traditional sense" is a good example of the kind of uninformed opinion that I wrote about in my previous post on this topic. But I don't mean to pick on Eric -- no one can be expected to be well informed about everything. Eric's post simply highlights the fact that rumors of software engineering's death have been greatly exaggerated, and we need to do a better job of spreading the word that, in reality, software engineering is alive and well. </p>]]></description>
  <dc:creator>stevemcc</dc:creator>
  <dc:date>2007-06-28T10:37:00Z</dc:date>
  <content:encoded><![CDATA[<p>A reader of my previous blog post on <a href="https://www.construx.com/10x_Software_Development/Software_Engineering_Ignorance/" title="Software Engineering Ignorance">Software Engineering Ignorance</a> pointed me to Eric Wise's blog post <a href="http://theruntime.com/blogs/ericwise/archive/2007/06/26/Rejecting-Software-Engineering.aspx">Rejecting Software Engineering</a>. Eric seems like a bright guy, and he's a persuasive writer, but his post is another example of what I was referring to in my earlier post -- that is, people who are uninformed about software engineering spreading misinformation about it. </p>
<p>One of Eric's arguments that is representative of other published arguments is that "software isn't like real engineering." But the "facts" he presents about real engineering are <em>way </em>off base. For example, he asserts early in his post that real engineering has "near perfect information on durability, composition, balance, etc.," but that claim is idealized and not correct. When John Roebling designed the Brooklyn Bridge, for example, the properties of steel cables weren't well understood, and so he used a safety factor of FOUR in designing the cable supports for the bridge. Obviously that is not even close to "near perfect information." Indeed, one of the hallmarks of engineering as opposed to science is that engineers will work with materials whose properties are not entirely understood, and they'll factor in safety margins until the science comes along later and allows more precision in the engineer's use of those materials. </p>
<p>Eric's post goes on to use poor estimation in software as a point against treating software as engineering: "Look at the estimation problems we have in software." Again, this assumes an idealized and incorrect view of other engineering disciplines. Can you say "Big Dig"? The largest "real engineering" project in recent memory, the Big Dig was originally estimated to cost $2.6 billion. The final cost was about $15 billion. [Thanks to an alert reader for pointing out an error in my numbers, now corrected.] In Seattle (where I live), the construction cost of the Seattle Mariners' baseball stadium ended up being nearly double the original estimates. There have been many, many cases like this, which I discuss in my book, <a href="http://www.stevemcconnell.com/est.htm">Software Estimation: Demystifying the Black Art</a>. Estimation error in software is not any better or worse than it is in other branches of engineering -- the central issue is that estimating large, one-of-a-kind artifacts is always going to be subject to a high degree of error. Estimating the 40th similar house you build in a housing development is easy, but so is estimating the 40th similar customized version of a software product you deploy. </p>
<p>Eric's post cites the fact that there are 10:1 differences in programmer productivity as an argument against treating software as engineering. Oddly, Eric cites drywall installers in support of this point, I guess to say that drywall installers are associated with traditional engineering. Here again, Eric needs to check his facts. Has he ever worked with drywall installers? I can tell you from personal experience that there is definitely a 10x difference between drywall installers, especially in terms of quality. Some guys install drywall in a way that makes it nearly impossible to texture well. Other guys get it right the first time, and texturing it is really easy. </p>
<p>The fact is that 10:1 differences in productivity and quality aren't unique to software, and so that fact doesn't differentiate software from engineering or from anything else. There was an interesting study conducted in the 1970s that found that the 80/20 rule applies in virtually every discipline: 20% of the programmers write 80% of the code, 20% of the police detectives make 80% of the arrests, 20% of the NFL quarterbacks make 80% of the touchdown passes, 20% of writers write 80% of the best selling novels, etc. Eric's observation that there are significant differences in productivity is valid, but it doesn't have any bearing on whether software is engineering. </p>
<p>Eric invokes the work of David Parnas as an argument against treating software as engineering, and says Parnas's work in information hiding undermines the openness that is needed for real engineering. I honestly could not follow the logic of his argument on this point. Moreover, Parnas has been one of the earliest and most prominent proponents of treating software as engineering. Indeed, Parnas founded Canada's first undergraduate program in software engineering at McMaster University. </p>
<p>Software engineering has already been defined as engineering; we have an international reference standard for that definition; the field's two largest professional bodies have jointly adopted a professional code of conduct for software engineers; we have accreditation standards for university programs in software engineering; numerous university programs have already been accredited; and several countries are licensing professional engineers in software. </p>
<p>Eric's conclusion that "I don't think software development will ever be able to be defined as engineering in the traditional sense" is a good example of the kind of uninformed opinion that I wrote about in my previous post on this topic. But I don't mean to pick on Eric -- no one can be expected to be well informed about everything. Eric's post simply highlights the fact that rumors of software engineering's death have been greatly exaggerated, and we need to do a better job of spreading the word that, in reality, software engineering is alive and well. </p>]]></content:encoded>
 </item>
 <item rdf:about="/10x_Software_Development/Software_Engineering_Ignorance/?blogid=23485">
  <title>Software Engineering Ignorance</title>
  <link>https://www.construx.com/10x_Software_Development/Software_Engineering_Ignorance/?blogid=23485</link>
  <description><![CDATA[<p>The February 2007 issue of <em>IEEE Computer</em> contained a column titled "Software Development: What Is the Problem?" (pp. 112, 110-111). The column author asserts,</p>
<p><strong>"Writing and maintaining software are not engineering activities. So it's not clear why we call software development<em> software engineering</em>."</strong></p>
<p>The author then brushes aside any further discussion of software development as engineering and proceeds to base an extended argument on the premise that software development is not engineering. I agree with the author that the specific act of giving instructions to the computer doesn't much resemble engineering. However, the fact that one software development activity out of many doesn't resemble engineering does not imply that software development as a whole doesn't resemble engineering. Numerous software development activities have clear counterparts in other engineering disciplines, including:</p>
<ul>
<li>Problem definition</li>
<li>Creation of models to verify the engineer's understanding of the problem</li>
<li>Feasibility studies to verify viability of design candidates</li>
<li>Design as a central activity</li>
<li>Creation of detailed plans for building the product</li>
<li>Inspections throughout the product-creation effort</li>
<li>Verification that the as-built product matches the product plans</li>
<li>Ongoing interplay between the abstract knowledge used by engineers and the practical knowledge gained during construction</li>
<li>etc.  </li>
</ul>
<p>This list could be much longer, but these items are sufficient to illustrate the point that, even though giving instructions to the computer doesn't have a clear counterpart in other engineering disciplines, many software development activities do have clear counterparts. </p>
<p>Taking a step back from the specific argument, I find it distressing that writers in 2007 are still propagating the myth that software development cannot be treated as engineering. We can certainly debate<em> the value </em>of treating software development as engineering, or software engineering's appropriate <em>areas of applicability</em>; but any debate about whether software development <em>can be </em>treated as engineering ignores the fact that it <em>is </em>being treated as engineering, and deeply so:</p>
<ul>
<li>The Computer Society adopted a Code of Ethics for Software Engineers almost 10 years ago. </li>
<li>The IEEE Computer Society approved the Software Engineering Body of Knowledge 2.0 in 2004, which was adopted as ISO/IEC Technical Report 19759:2005. </li>
<li>Curriculum guidelines and accreditation standards have been established for undergraduate software engineering programs. </li>
<li>In the United States the official engineering accreditation board, ABET, has accredited 13 undergraduate software engineering programs since 2003, and in Canada 9 such programs have been accredited (by CEAB). </li>
<li>Numerous provinces in Canada license professional software engineers, and professional engineers are chartered in software in England. </li>
</ul>
<p>It's appropriate and useful to debate <em>in what circumstances </em>should software development be treated as engineering, or what kinds of software development work better when <em>not </em>treated as engineering, or <em>what portion </em>of software development should be treated as engineering, or <em>how engineers in software should be trained</em>, or <em>what proportion of software developers </em>really need to be software <em>engineers </em>-- but arguing whether it's possible to approach software as an engineering discipline is years out of date.</p>
<p>What do you make of the fact that we can have a software engineering body of knowledge that has been adopted as an international standard (ISO/IEC TR 19759:2005), we have bachelor's degree programs in software engineering, we have accreditation standards for those programs, numerous programs have actually been accredited--yet people are still arguing <em>whether </em>software can be treated as engineering? Is the issue simple ignorance, or is it something deeper?</p>]]></description>
  <dc:creator>stevemcc</dc:creator>
  <dc:date>2007-06-23T10:15:00Z</dc:date>
  <content:encoded><![CDATA[<p>The February 2007 issue of <em>IEEE Computer</em> contained a column titled "Software Development: What Is the Problem?" (pp. 112, 110-111). The column author asserts,</p>
<p><strong>"Writing and maintaining software are not engineering activities. So it's not clear why we call software development<em> software engineering</em>."</strong></p>
<p>The author then brushes aside any further discussion of software development as engineering and proceeds to base an extended argument on the premise that software development is not engineering. I agree with the author that the specific act of giving instructions to the computer doesn't much resemble engineering. However, the fact that one software development activity out of many doesn't resemble engineering does not imply that software development as a whole doesn't resemble engineering. Numerous software development activities have clear counterparts in other engineering disciplines, including:</p>
<ul>
<li>Problem definition</li>
<li>Creation of models to verify the engineer's understanding of the problem</li>
<li>Feasibility studies to verify viability of design candidates</li>
<li>Design as a central activity</li>
<li>Creation of detailed plans for building the product</li>
<li>Inspections throughout the product-creation effort</li>
<li>Verification that the as-built product matches the product plans</li>
<li>Ongoing interplay between the abstract knowledge used by engineers and the practical knowledge gained during construction</li>
<li>etc.  </li>
</ul>
<p>This list could be much longer, but these items are sufficient to illustrate the point that, even though giving instructions to the computer doesn't have a clear counterpart in other engineering disciplines, many software development activities do have clear counterparts. </p>
<p>Taking a step back from the specific argument, I find it distressing that writers in 2007 are still propagating the myth that software development cannot be treated as engineering. We can certainly debate<em> the value </em>of treating software development as engineering, or software engineering's appropriate <em>areas of applicability</em>; but any debate about whether software development <em>can be </em>treated as engineering ignores the fact that it <em>is </em>being treated as engineering, and deeply so:</p>
<ul>
<li>The Computer Society adopted a Code of Ethics for Software Engineers almost 10 years ago. </li>
<li>The IEEE Computer Society approved the Software Engineering Body of Knowledge 2.0 in 2004, which was adopted as ISO/IEC Technical Report 19759:2005. </li>
<li>Curriculum guidelines and accreditation standards have been established for undergraduate software engineering programs. </li>
<li>In the United States the official engineering accreditation board, ABET, has accredited 13 undergraduate software engineering programs since 2003, and in Canada 9 such programs have been accredited (by CEAB). </li>
<li>Numerous provinces in Canada license professional software engineers, and professional engineers are chartered in software in England. </li>
</ul>
<p>It's appropriate and useful to debate <em>in what circumstances </em>should software development be treated as engineering, or what kinds of software development work better when <em>not </em>treated as engineering, or <em>what portion </em>of software development should be treated as engineering, or <em>how engineers in software should be trained</em>, or <em>what proportion of software developers </em>really need to be software <em>engineers </em>-- but arguing whether it's possible to approach software as an engineering discipline is years out of date.</p>
<p>What do you make of the fact that we can have a software engineering body of knowledge that has been adopted as an international standard (ISO/IEC TR 19759:2005), we have bachelor's degree programs in software engineering, we have accreditation standards for those programs, numerous programs have actually been accredited--yet people are still arguing <em>whether </em>software can be treated as engineering? Is the issue simple ignorance, or is it something deeper?</p>]]></content:encoded>
 </item>
 <item rdf:about="/10x_Software_Development/Classic_Mistakes_Updated/?blogid=23485">
  <title>Classic Mistakes Updated</title>
  <link>https://www.construx.com/10x_Software_Development/Classic_Mistakes_Updated/?blogid=23485</link>
  <description><![CDATA[<p>In <em>Rapid Development </em>I wrote that, "Some ineffective development practices have been chosen so often, by so many people, with such predictable, bad results that they deserve to be called 'Classic Mistakes.'" That was in 1996. At that time I was self-employed and most of my experience had come from working with only a handful of companies.</p>
<span>New Classic Mistakes</span><p>After founding Construx, a decade of work with hundreds of companies has enabled us to identify several new classic mistakes. Here are the additional classic mistakes we've identified:</p>
<ul>
<li>Confusing estimates with targets</li>
<li>Excessive multi-tasking </li>
<li>Assuming global development has a negligible impact on total effort </li>
<li>Unclear project vision</li>
<li>Trusting the map more than the terrain </li>
<li>Outsourcing to reduce cost </li>
<li>Letting a team go dark (replaces the previous "lack of management controls") </li>
</ul>
<span>Next Step: Hard Data on Classic Mistakes </span><p>The next step in our work is to identify just how "classic" these classic mistakes really are. Are they really made very frequently, and is the impact really very bad when the mistakes are made? To answer those questions, we've launched a <a href="https://vovici.com/wsb.dll/s/10431g2996e">Classic Mistakes Survey</a>, and I invite you to take it. The survey lists 42 classic mistakes and asks you to rank the frequency and severity of each mistake in your experience.</p>
<p>When we have enough responses we'll post the results of the survey. People who complete the survey will receive a summary of the survey at least 30 days sooner than the general public.</p>
<p>So please, <a href="https://vovici.com/wsb.dll/s/10431g2996e">take the survey</a>!</p>
<span>The 36 Original Classic Mistakes</span><p>For the record, the table below lists the original classic mistakes from <em>Rapid Development</em>. And here is a link to the <a title="Classic Mistakes, Chapter 3 from Rapid Development, by Steve McConnell" href="http://www.stevemcconnell.com/rdenum.htm">full text</a> of the original Classic Mistakes chapter from <em>Rapid Development</em>.</p>
<table id="table1">
<tbody>
<tr>
<td><strong>People-Related Mistakes</strong>  </td>
<td rowspan="2">  </td>
<td><strong>Process-Related Mistakes</strong>  </td>
<td rowspan="2">  </td>
<td><strong>Product-Related Mistakes</strong>  </td>
<td rowspan="2">  </td>
<td><strong>Technology-Related Mistakes</strong>  </td>
</tr>
<tr>
<td>1. Undermined motivation <p>2. Weak personnel</p>
<p>3. Uncontrolled problem employees</p>
<p>4. Heroics</p>
<p>5. Adding people to a late project </p>
<p>6. Noisy, crowded offices</p>
<p>7. Friction between developers and customers </p>
<p>8. Unrealistic expectations </p>
<p>9. Lack of effective project sponsorship </p>
<p>10. Lack of stakeholder buy-in </p>
<p>11. Lack of user input </p>
<p>12. Politics placed over substance </p>
<p>13. Wishful thinking </p>
</td>
<td>14. Overly optimistic schedules <p>15. Insufficient risk management </p>
<p>16. Contractor failure </p>
<p>17. Insufficient planning </p>
<p>18. Abandonment of planning under pressure </p>
<p>19. Wasted time during the fuzzy front end </p>
<p>20. Shortchanged upstream activities </p>
<p>21. Inadequate design </p>
<p>22. Shortchanged quality assurance </p>
<p>23. Insufficient management controls </p>
<p>24. Premature or too frequent convergence </p>
<p>25. Omitting necessary tasks from estimates </p>
<p>26. Planning to catch up later</p>
<p>27. Code-like-hell programming </p>
</td>
<td>28. Requirements gold-plating <p>29. Feature creep </p>
<p>30. Developer gold-plating </p>
<p>31. Push me, pull me negotiation</p>
<p>32. Research-oriented development </p>
</td>
<td>33. Silver-bullet syndrome <p>34. Overestimated savings from new tools or methods </p>
<p>35. Switching tools in the middle of a project </p>
<p>36. Lack of automated source-code control</p>
</td>
</tr>
</tbody>
</table>
<p><a href="https://vovici.com/wsb.dll/s/10431g2996e"><strong>Take the survey</strong></a>!</p>]]></description>
  <dc:creator>stevemcc</dc:creator>
  <dc:date>2007-06-15T22:07:00Z</dc:date>
  <content:encoded><![CDATA[<p>In <em>Rapid Development </em>I wrote that, "Some ineffective development practices have been chosen so often, by so many people, with such predictable, bad results that they deserve to be called 'Classic Mistakes.'" That was in 1996. At that time I was self-employed and most of my experience had come from working with only a handful of companies.</p>
<span>New Classic Mistakes</span><p>After founding Construx, a decade of work with hundreds of companies has enabled us to identify several new classic mistakes. Here are the additional classic mistakes we've identified:</p>
<ul>
<li>Confusing estimates with targets</li>
<li>Excessive multi-tasking </li>
<li>Assuming global development has a negligible impact on total effort </li>
<li>Unclear project vision</li>
<li>Trusting the map more than the terrain </li>
<li>Outsourcing to reduce cost </li>
<li>Letting a team go dark (replaces the previous "lack of management controls") </li>
</ul>
<span>Next Step: Hard Data on Classic Mistakes </span><p>The next step in our work is to identify just how "classic" these classic mistakes really are. Are they really made very frequently, and is the impact really very bad when the mistakes are made? To answer those questions, we've launched a <a href="https://vovici.com/wsb.dll/s/10431g2996e">Classic Mistakes Survey</a>, and I invite you to take it. The survey lists 42 classic mistakes and asks you to rank the frequency and severity of each mistake in your experience.</p>
<p>When we have enough responses we'll post the results of the survey. People who complete the survey will receive a summary of the survey at least 30 days sooner than the general public.</p>
<p>So please, <a href="https://vovici.com/wsb.dll/s/10431g2996e">take the survey</a>!</p>
<span>The 36 Original Classic Mistakes</span><p>For the record, the table below lists the original classic mistakes from <em>Rapid Development</em>. And here is a link to the <a title="Classic Mistakes, Chapter 3 from Rapid Development, by Steve McConnell" href="http://www.stevemcconnell.com/rdenum.htm">full text</a> of the original Classic Mistakes chapter from <em>Rapid Development</em>.</p>
<table id="table1">
<tbody>
<tr>
<td><strong>People-Related Mistakes</strong>  </td>
<td rowspan="2">  </td>
<td><strong>Process-Related Mistakes</strong>  </td>
<td rowspan="2">  </td>
<td><strong>Product-Related Mistakes</strong>  </td>
<td rowspan="2">  </td>
<td><strong>Technology-Related Mistakes</strong>  </td>
</tr>
<tr>
<td>1. Undermined motivation <p>2. Weak personnel</p>
<p>3. Uncontrolled problem employees</p>
<p>4. Heroics</p>
<p>5. Adding people to a late project </p>
<p>6. Noisy, crowded offices</p>
<p>7. Friction between developers and customers </p>
<p>8. Unrealistic expectations </p>
<p>9. Lack of effective project sponsorship </p>
<p>10. Lack of stakeholder buy-in </p>
<p>11. Lack of user input </p>
<p>12. Politics placed over substance </p>
<p>13. Wishful thinking </p>
</td>
<td>14. Overly optimistic schedules <p>15. Insufficient risk management </p>
<p>16. Contractor failure </p>
<p>17. Insufficient planning </p>
<p>18. Abandonment of planning under pressure </p>
<p>19. Wasted time during the fuzzy front end </p>
<p>20. Shortchanged upstream activities </p>
<p>21. Inadequate design </p>
<p>22. Shortchanged quality assurance </p>
<p>23. Insufficient management controls </p>
<p>24. Premature or too frequent convergence </p>
<p>25. Omitting necessary tasks from estimates </p>
<p>26. Planning to catch up later</p>
<p>27. Code-like-hell programming </p>
</td>
<td>28. Requirements gold-plating <p>29. Feature creep </p>
<p>30. Developer gold-plating </p>
<p>31. Push me, pull me negotiation</p>
<p>32. Research-oriented development </p>
</td>
<td>33. Silver-bullet syndrome <p>34. Overestimated savings from new tools or methods </p>
<p>35. Switching tools in the middle of a project </p>
<p>36. Lack of automated source-code control</p>
</td>
</tr>
</tbody>
</table>
<p><a href="https://vovici.com/wsb.dll/s/10431g2996e"><strong>Take the survey</strong></a>!</p>]]></content:encoded>
 </item>
 <item rdf:about="/10x_Software_Development/Estimation_of_Outsourced_Projects/?blogid=23485">
  <title>Estimation of Outsourced Projects</title>
  <link>https://www.construx.com/10x_Software_Development/Estimation_of_Outsourced_Projects/?blogid=23485</link>
<description><![CDATA[<p>A question we sometimes hear from our clients is, "My company does outsourced software development for other companies. Is there anything special about estimating in that context?" There actually are some distinctive aspects to estimating in the context of preparing a bid or price quote, which I don't discuss in my book <a href="http://www.stevemcconnell.com/est.htm" target="_blank">Software Estimation</a>.</p>
<span>Estimation in a Time &amp; Materials Context</span><p>Creating estimates to support time and materials bids (i.e., charging by the hour) is only barely a special case, because the very structure of T&amp;M implies some variability in the outcome, just as my recommendations for estimating in-house development work do. The only real difference, if you can even call it a difference, is that you have to make doubly sure that you're setting expectations clearly: "This is an estimate. We can't know the outcome with 100% certainty. Actual results will depend on exact details of what you end up requiring and how different issues get prioritized throughout the project," and so on.</p>
<span>Estimation in a Fixed-Price Context</span><p>In contrast, estimation in a fixed-price context is very much a special case. If your estimate causes you to bid too high, you won't get the work. If it causes you to bid too low, you will lose money. Both of these are undesirable outcomes! In other circumstances I usually find myself recommending that people back away from really elaborate estimation approaches because there's so much inherent variability in software projects that the accuracy of your estimates is inherently limited, and you reach the point of diminishing returns on estimation accuracy after you've put in even a little bit of effort. But a fixed-price environment, at proposal time, is one of the few circumstances I've encountered in which an elaborate estimation approach is warranted. And so my first recommendation is, <b>If your business depends on creating fixed price bids, focus on estimation skills as a core competency and treat estimation work as a business-critical function.</b> That means, read <a href="http://www.stevemcconnell.com/est.htm" target="_blank">Software Estimation</a>, take my company's <a href="/Seminars/?dm=0">estimation class</a>, and read <a target="_self" title="other people's estimation books" href="/Resources/Annotated_Bibliography/">other people's estimation books</a>.</p>
<p>My second recommendation is similar to my general recommendation that you separate the "estimate" from the "target." In a fixed price bid context, <b>separate estimation from pricing.</b> The estimate informs the price you'll charge, of course, but there isn't any necessary relationship between the two. You can price a bid at the "unlikely" end of the estimation range if it's really important to you to win the work, and you're willing to lose money on it. Or you can price it way above the estimation range if you think you have an approach that allows you to perform the work at low cost to you and that delivers a higher value to the client.</p>
<p>We've seen lots of companies wrap themselves around the axle when the sales staff insists on lowering the "estimate" to get the work, when really what needs to be lowered is the price. This creates confusion throughout the project. Giving everyone permission to keep estimates and prices separate increases accountability on the sales side (they have to own up to the fact that they're pricing something on the low end of the estimation range and get buy-in to do that), and it improves planning on the dev side -- if there's a big gap between the price and the estimate, the project needs to be treated as a higher-risk project than if there isn't a large gap. When estimation and pricing are merged into one concept and called "estimation" (even though it isn't really estimation), the project planners can lose the important risk information that arises from the relationship between the price and the estimates.</p>
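The estimate/price separation described above can be sketched as a small risk check: the price is chosen freely, and its position relative to the estimation range becomes explicit risk information for the project planners. The range values, thresholds, and category wording below are hypothetical illustrations, not anything prescribed in the post.

```python
# Sketch: keep the estimate and the price as separate numbers, and surface
# the gap between them as risk information. Thresholds are hypothetical.

def price_risk(estimate_low, estimate_high, price):
    """Classify a bid's risk from where the price falls in the estimate range."""
    if price < estimate_low:
        return "high risk: priced below the entire estimation range"
    if price < (estimate_low + estimate_high) / 2:
        return "elevated risk: priced in the low half of the range"
    return "nominal risk: priced at or above the midpoint of the range"

# A 400..700 staff-hour estimate, priced aggressively at 380 to win the work:
print(price_risk(400, 700, 380))   # high risk
print(price_risk(400, 700, 500))   # elevated risk
print(price_risk(400, 700, 600))   # nominal risk
```

The point of the sketch is only that the price never overwrites the estimate; both numbers survive into project planning.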
<p><b>Try not to do the "commitment/pricing" estimate until later in the cone of uncertainty.</b> Of course this is the holy grail, but most companies can't do this with any regularity because they feel that the competitive pressures require them to submit bids in the wider part of the cone.</p>
<p><b>Bid smaller amounts of work when you can, i.e., be more iterative. </b>One of the great benefits of iterative development is the ability to generate project-level data on early iterations that can be used to estimate later iterations with really good accuracy. The companies we've worked with have settled in on 3 iterations as the number needed to calibrate a project team's productivity. Interestingly enough, it doesn't seem to matter whether the iterations are 1 week or 1 month or longer -- it still takes 3 iterations.</p>
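The calibration idea above -- use the first three iterations to measure the team's actual throughput, then project the rest -- can be sketched in a few lines. The story-point figures are hypothetical; the only element taken from the post is the three-iteration minimum.

```python
# Sketch: calibrating a remaining-work estimate from early iterations,
# per the rule of thumb above that ~3 iterations are needed to calibrate.
# All numbers are hypothetical.

def calibrated_estimate(completed_points, remaining_backlog_points):
    """Project remaining iterations from observed per-iteration throughput."""
    if len(completed_points) < 3:
        raise ValueError("need at least 3 iterations of data to calibrate")
    velocity = sum(completed_points) / len(completed_points)
    iterations_left = remaining_backlog_points / velocity
    return velocity, iterations_left

# Three completed iterations of 18, 22, and 20 points; 120 points remain.
velocity, iterations_left = calibrated_estimate([18, 22, 20], 120)
print(f"observed velocity: {velocity:.1f} points/iteration")
print(f"estimated iterations remaining: {iterations_left:.1f}")
```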
<p><b>Consider creating two-phase bids when you can.</b> You can call the first phase "preliminary work", "exploratory phase," "proof of concept phase," "design phase", "Phase 1", etc. The purpose of this phase is to attack all the sources of high variability feeding into the estimates and ultimately deliver a bid for the second part of the project after the cone has been narrowed considerably. We've seen many companies use this approach successfully, although I can't think of any companies we've seen that have been able to use it for the majority of the work they bid on. Again, competitive pressures seem to lead to their using this approach only selectively.</p>
<p><b>Two phase bids can be structured either more "waterfallish" or more "agile". </b>The description above assumes a more linear development approach in which you're trying to get most or all of the requirements defined up front and then bid the whole project. In a more agile approach, you can treat "Phase 1" as an actual design-build-deliver cycle, but structure it into 3 iterations so that you can get good project-level calibration data that you can then use as the basis for bidding the remainder of the project.</p>
<p><b>Collect historical data on your estimates at proposal time vs. the eventual outcomes so that you can build your own cone of uncertainty. </b>The better records you keep about what materials fed into your estimate, the more meaningful your cone will be. For example, you might have really specific requirements for one bid and pretty vague requirements for another. In one sense, if they're both "proposal time" estimates, you might treat them similarly. But if one was supported by significantly more detail in the requirements, that implies you're at a different location in the cone, and you'd want to account for that.</p>
<span>Non-Estimation Recommendations</span><p>On this particular topic, several of the most powerful recommendations aren't specifically about estimation; they're about project control.</p>
<p><b>Go highly iterative as early as you can, regardless of whether the bid is structured into one or two phases. </b>Even if you're working to a single-stage bid, there's value to getting project-level calibration data sooner rather than later. If you discover 10% of the way into the project that you've underbid it by a factor of 2, you can go back to the customer sooner and reset their expectations, you can give the customer options that you still have time to act on, you can implement functionality in strict priority order, you can identify the project as a high risk project and manage it accordingly, etc. But if you don't have the project-level data that tells you your initial estimates were way off, you'll just run the project as "business as usual", which is really the last thing you want to do.</p>
<p><b>Document assumptions at the contract level, spell them out in as much detail as you can, and then *contain* them.</b> If you build a house, your building contractor might let you specify the kitchen cabinets, but there will be a line item in the contract budget for cabinets. If you end up choosing cabinets that are more expensive, you pay the difference. You typically would have line items for all kinds of things: lighting, landscaping, carpet, flooring, countertops, etc. The areas that are more certain (e.g., roofing, siding, foundation, plumbing) are simply specified. In software projects we also typically have areas that we can specify in detail and other areas that we don't know enough about at contract time to specify in detail. So in software contracts you can include clauses like, "The exact work required in the XYZ module has been budgeted at 40 staff hours. If work on XYZ exceeds its budget, the contract price will be increased correspondingly." I'm not an attorney so I am not recommending this as specific contract language, but hopefully this gives you a general idea about the general kind of clause you would ask your attorney to include in a contract.</p>
<p><b>Manage your set of projects/bids as an investment portfolio, accepting that some will "win" and some will "lose." </b>From a theoretical point of view, if you're estimating early in the cone there just isn't a good answer to improving the accuracy of your estimates on a project-by-project basis. The fact is, your estimates will be off to varying degrees, and when you happen to get one that's pretty accurate it will be a matter of luck, not skill, because of the inherent limits of the Cone. On the other hand, assuming there isn't any bias in the early-in-the-cone estimates (which can be a huge assumption), you can essentially punt on the question of project-by-project profitability and instead focus on portfolio-level profitability. Solving the problem of estimating accurately in the wide part of the cone for an individual project isn't even theoretically solvable. But solving the problem of estimating a <i>collection of projects </i>in the wide part of the cone IS solvable. The key to solving that problem is rooting out any systemic bias in those estimates so that the error tendency is neutral. Then with that set of neutral estimates you simply increase each estimate by the amount you'd like your profit margin to be. If you want it to be 10%, you bid 10% higher than your neutral estimate. This will result in your actual project cost coming in higher than some of your estimates and lower than others, but on balance, assuming no systemic bias, you should make a 10% profit on your <i>portfolio </i>of projects.</p>
<p>Of course this requires that you have several projects in your portfolio, and that there aren't just one or two huge projects whose estimation errors could drown out whatever error was contributed by the smaller projects, and that you can afford to take a loss on some percentage of your projects. And those are big assumptions that might not be true in your specific case.</p>
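The portfolio arithmetic above can be checked with a tiny simulation: give each project an unbiased (zero-mean) estimation error, bid every project at its neutral estimate plus the desired margin, and the realized margin across the portfolio lands near the target even though individual projects win or lose. The error distribution, project sizes, and portfolio size below are all hypothetical.

```python
# Sketch of the portfolio argument: unbiased estimate errors + a fixed
# markup yield roughly the target margin in aggregate. All distributions
# and sizes here are hypothetical illustrations.
import random

random.seed(1)
target_margin = 0.10
total_revenue = 0.0
total_cost = 0.0
for _ in range(200):                      # a portfolio of 200 projects
    estimate = random.uniform(50, 500)    # neutral estimate, in staff-weeks
    bid = estimate * (1 + target_margin)  # price = estimate + desired margin
    # Actual cost varies widely around the estimate, with no systemic bias.
    actual = estimate * random.gauss(1.0, 0.25)
    total_revenue += bid
    total_cost += actual

portfolio_margin = (total_revenue - total_cost) / total_cost
print(f"realized portfolio margin: {portfolio_margin:.1%}")
```

With many projects the realized margin converges toward the 10% target; with only one or two large projects it would not, which is exactly the caveat in the paragraph above.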
<span>Bottom Line on Estimating in a Fixed-Price Bidding Context</span><p>The bottom line on this particular question is that it isn't possible to solve this particular problem purely using estimation practices. You have to change <i>when </i>you're estimating (later in the Cone), or <i>what </i>you're estimating (e.g., portfolios vs. individual projects), or <i>how many times </i>you estimate (e.g., two-phase bids). And project-control responses (as opposed to estimation responses) and even contract-level responses will probably turn out to be at least as useful as estimation responses.</p>
<span>Resources</span>
<ul>
<li>My estimation book, <a href="http://www.stevemcconnell.com/est.htm" target="_blank">Software Estimation</a> &#160;</li>
<li>My company's <a href="/Seminars/?dm=0">software estimation class</a>, which I teach a few times a year, which is both fun and highly educational </li>
<li>My company's free <a href="/Construx_Estimate/">Construx Estimate</a> estimation software </li>
<li>Estimation <a href="/Thought_Leadership/Events/Practical_benefits_profound_results/">consulting services</a>--we've helped lots of companies improve their estimation practices</li>
<li>Comprehensive <a href="/Resources_On_Software_Estimation/">list of estimation resources</a> on my company's web site, including a link to our Cone of Uncertainty poster, Cost of Estimation Error poster, and numerous other resources</li>
</ul>]]></description>
  <dc:creator>stevemcc</dc:creator>
  <dc:date>2007-06-06T07:59:00Z</dc:date>
<content:encoded><![CDATA[<p>A question we sometimes hear from our clients is, "My company does outsourced software development for other companies. Is there anything special about estimating in that context?" There actually are some distinctive aspects to estimating in the context of preparing a bid or price quote, which I don't discuss in my book <a href="http://www.stevemcconnell.com/est.htm" target="_blank">Software Estimation</a>.</p>
<span>Estimation in a Time &amp; Materials Context</span><p>Creating estimates to support time and materials bids (i.e., charging by the hour) is only barely a special case, because the very structure of T&amp;M implies some variability in the outcome, just as my recommendations for estimating in-house development work do. The only real difference, if you can even call it a difference, is that you have to make doubly sure that you're setting expectations clearly: "This is an estimate. We can't know the outcome with 100% certainty. Actual results will depend on exact details of what you end up requiring and how different issues get prioritized throughout the project," and so on.</p>
<span>Estimation in a Fixed-Price Context</span><p>In contrast, estimation in a fixed-price context is very much a special case. If your estimate causes you to bid too high, you won't get the work. If it causes you to bid too low, you will lose money. Both of these are undesirable outcomes! In other circumstances I usually find myself recommending that people back away from really elaborate estimation approaches because there's so much inherent variability in software projects that the accuracy of your estimates is inherently limited, and you reach the point of diminishing returns on estimation accuracy after you've put in even a little bit of effort. But a fixed-price environment, at proposal time, is one of the few circumstances I've encountered in which an elaborate estimation approach is warranted. And so my first recommendation is, <b>If your business depends on creating fixed price bids, focus on estimation skills as a core competency and treat estimation work as a business-critical function.</b> That means, read <a href="http://www.stevemcconnell.com/est.htm" target="_blank">Software Estimation</a>, take my company's <a href="https://www.construx.com/Seminars/?dm=0">estimation class</a>, and read <a target="_self" title="other people's estimation books" href="https://www.construx.com/Resources/Annotated_Bibliography/">other people's estimation books</a>.</p>
<p>My second recommendation is similar to my general recommendation that you separate the "estimate" from the "target." In a fixed price bid context, <b>separate estimation from pricing.</b> The estimate informs the price you'll charge, of course, but there isn't any necessary relationship between the two. You can price a bid at the "unlikely" end of the estimation range if it's really important to you to win the work, and you're willing to lose money on it. Or you can price it way above the estimation range if you think you have an approach that allows you to perform the work at low cost to you and that delivers a higher value to the client.</p>
<p>We've seen lots of companies wrap themselves around the axle when the sales staff insists on lowering the "estimate" to get the work, when really what needs to be lowered is the price. This creates confusion throughout the project. Giving everyone permission to keep estimates and prices separate increases accountability on the sales side (they have to own up to the fact that they're pricing something on the low end of the estimation range and get buy-in to do that), and it improves planning on the dev side -- if there's a big gap between the price and the estimate, the project needs to be treated as a higher-risk project than if there isn't a large gap. When estimation and pricing are merged into one concept and called "estimation" (even though it isn't really estimation), the project planners can lose the important risk information that arises from the relationship between the price and the estimates.</p>
<p><b>Try not to do the "commitment/pricing" estimate until later in the cone of uncertainty.</b> Of course this is the holy grail, but most companies can't do this with any regularity because they feel that the competitive pressures require them to submit bids in the wider part of the cone.</p>
<p><b>Bid smaller amounts of work when you can, i.e., be more iterative. </b>One of the great benefits of iterative development is the ability to generate project-level data on early iterations that can be used to estimate later iterations with really good accuracy. The companies we've worked with have settled in on 3 iterations as the number needed to calibrate a project team's productivity. Interestingly enough, it doesn't seem to matter whether the iterations are 1 week or 1 month or longer -- it still takes 3 iterations.</p>
<p><b>Consider creating two-phase bids when you can.</b> You can call the first phase "preliminary work", "exploratory phase," "proof of concept phase," "design phase", "Phase 1", etc. The purpose of this phase is to attack all the sources of high variability feeding into the estimates and ultimately deliver a bid for the second part of the project after the cone has been narrowed considerably. We've seen many companies use this approach successfully, although I can't think of any companies we've seen that have been able to use it for the majority of the work they bid on. Again, competitive pressures seem to lead to their using this approach only selectively.</p>
<p><b>Two phase bids can be structured either more "waterfallish" or more "agile". </b>The description above assumes a more linear development approach in which you're trying to get most or all of the requirements defined up front and then bid the whole project. In a more agile approach, you can treat "Phase 1" as an actual design-build-deliver cycle, but structure it into 3 iterations so that you can get good project-level calibration data that you can then use as the basis for bidding the remainder of the project.</p>
<p><b>Collect historical data on your estimates at proposal time vs. the eventual outcomes so that you can build your own cone of uncertainty. </b>The better records you keep about what materials fed into your estimate, the more meaningful your cone will be. For example, you might have really specific requirements for one bid and pretty vague requirements for another. In one sense, if they're both "proposal time" estimates, you might treat them similarly. But if one was supported by significantly more detail in the requirements, that implies you're at a different location in the cone, and you'd want to account for that.</p>
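<p>As a sketch of what "building your own cone" might look like, the following computes the historical spread of actual-to-estimate ratios for estimates made at a comparable point in the cone (the data here is made up for illustration):</p>

```python
# Sketch: derive an empirical uncertainty range from past projects.
# Each record pairs a proposal-time estimate with the eventual actual;
# the spread of the ratios is your organization's cone width at that
# point in the cone. The data values are hypothetical.

def cone_multipliers(records):
    """Return (low, high) actual/estimate multipliers seen historically.

    records: list of (estimate, actual) pairs from completed projects
    whose estimates were made with a comparable level of requirements
    detail.
    """
    ratios = sorted(actual / estimate for estimate, actual in records)
    return ratios[0], ratios[-1]

history = [(100, 180), (80, 70), (120, 240), (60, 90)]  # staff-weeks
low, high = cone_multipliers(history)
print(low, high)   # 0.875 2.0
```

<p>Keeping separate record sets for detailed-requirements bids and vague-requirements bids, as suggested above, would give you a separate (low, high) pair for each location in the cone.</p>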
<span>Non-Estimation Recommendations</span><p>On this particular topic, several of the most powerful recommendations aren't specifically about estimation; they're about project control.</p>
<p><b>Go highly iterative as early as you can, regardless of whether the bid is structured into one or two phases. </b>Even if you're working to a single-stage bid, there's value to getting project-level calibration data sooner rather than later. If you discover 10% of the way into the project that you've under bid it by a factor of 2, you can go back to the customer sooner and reset their expectations, you can give the customer options that you still have time to act on, you can implement functionality in strict priority order, you can identify the project as a high risk project and manage it accordingly, etc. But if you don't have the project level data that tells you your initial estimates were way off, you'll just run the project as "business as usual", which is really the last thing you want to do.</p>
<p><b>Document assumptions at the contract level, spell them out in as much detail as you can, and then *contain* them.</b> If you build a house, your building contractor might let you specify the kitchen cabinets, but there will be a line item in the contract budget for cabinets. If you end up choosing cabinets that are more expensive, you pay the difference. You typically would have line items for all kinds of things: lighting, landscaping, carpet, flooring, countertops, etc. The areas that are more certain (e.g., roofing, siding, foundation, plumbing) are simply specified. In software projects we also typically have areas that we can specify in detail and other areas that we don't know enough about at contract time to specify in detail. So in software contracts you can include clauses like, "The exact work required in the XYZ module has been budgeted at 40 staff hours. If work on XYZ exceeds its budget, the contract price will be increased correspondingly." I'm not an attorney, so I'm not recommending this as specific contract language, but hopefully it gives you a general idea of the kind of clause you would ask your attorney to include in a contract.</p>
<p><b>Manage your set of projects/bids as an investment portfolio, accepting that some will "win" and some will "lose." </b>From a theoretical point of view, if you're estimating early in the cone there just isn't a good answer to improving the accuracy of your estimates on a project-by-project basis. The fact is, your estimates will be off to varying degrees, and when you happen to get one that's pretty accurate it will be a matter of luck, not skill, because of the inherent limits of the Cone. On the other hand, assuming there isn't any bias in the early-in-the-cone estimates (which can be a huge assumption), you can essentially punt on the question of project-by-project profitability and instead focus on portfolio-level profitability. Solving the problem of estimating accurately in the wide part of the cone for an individual project isn't even theoretically solvable. But solving the problem of estimating a <i>collection of projects </i>in the wide part of the cone IS solvable. The key to solving that problem is rooting out any systemic bias in those estimates so that the error tendency is neutral. Then with that set of neutral estimates you simply increase each estimate by the amount you'd like your profit margin to be. If you want it to be 10%, you bid 10% higher than your neutral estimate. This will result in your actual project cost coming in higher than some of your estimates and lower than others, but on balance, assuming no systemic bias, you should make a 10% profit on your <i>portfolio </i>of projects.</p>
<p>Of course this requires that you have several projects in your portfolio, and that there aren't just one or two huge projects whose estimation errors could drown out whatever error was contributed by the smaller projects, and that you can afford to take a loss on some percentage of your projects. And those are big assumptions that might not be true in your specific case.</p>
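<p>The portfolio argument is easy to check with a small simulation. This is only a sketch under the stated assumptions (unbiased estimates, many independent projects); the error model and numbers are invented for illustration:</p>

```python
import random

# Sketch: with neutral (unbiased) estimates, bidding each project at
# estimate * (1 + margin) yields roughly that margin across the
# portfolio, even though individual projects win or lose.

def portfolio_margin(true_costs, margin, rng):
    total_bid = total_cost = 0.0
    for true_cost in true_costs:
        # Wide per-project error (0.5x to 1.5x), but no systemic
        # bias -- the key assumption in the text.
        estimate = true_cost * rng.uniform(0.5, 1.5)
        total_bid += estimate * (1 + margin)
        total_cost += true_cost
    return (total_bid - total_cost) / total_cost

rng = random.Random(42)
projects = [rng.uniform(50, 500) for _ in range(200)]
print(round(portfolio_margin(projects, 0.10, rng), 2))  # close to 0.10
```

<p>Shrink the portfolio to a handful of projects, or add a systemic low bias to the estimates, and the realized margin drifts well away from the target -- which is exactly the caveat above.</p>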
<span>Bottom Line on Estimating in a Fixed-Price Bidding Context</span><p>The bottom line on this particular question is that it isn't possible to solve this particular problem purely using estimation practices. You have to change <i>when </i>you're estimating (later in the Cone), or <i>what </i>you're estimating (e.g., portfolios vs. individual projects), or <i>how many times </i>you estimate (e.g., two-phase bids). And project-control responses (as opposed to estimation responses) and even contract-level responses will probably turn out to be at least as useful as estimation responses.</p>
<p><b>Resources</b></p>
<ul>
<li>My estimation book, <a href="http://www.stevemcconnell.com/est.htm" target="_blank">Software Estimation</a> &#160;</li>
<li>My company's <a href="https://www.construx.com/Seminars/?dm=0">software estimation class</a>, which I teach a few times a year, which is both fun and highly educational </li>
<li>My company's free <a href="https://www.construx.com/Construx_Estimate/">Construx Estimate</a> estimation software </li>
<li>Estimation <a href="https://www.construx.com/Thought_Leadership/Events/Practical_benefits_profound_results/">consulting services</a>--we've helped lots of companies improve their estimation practices</li>
<li>Comprehensive <a href="https://www.construx.com/Resources_On_Software_Estimation/">list of estimation resources</a> on my company's web site, including a link to our Cone of Uncertainty poster, Cost of Estimation Error poster, and numerous other resources</li>
</ul>]]></content:encoded>
 </item>
 <item rdf:about="/10x_Software_Development/Software_Compensation_2007--Is_it_1999_All_Over_Again_/?blogid=23485">
  <title>Software Compensation 2007--Is it 1999 All Over Again?</title>
  <link>https://www.construx.com/10x_Software_Development/Software_Compensation_2007--Is_it_1999_All_Over_Again_/?blogid=23485</link>
  <description><![CDATA[<p>A comment I'm hearing with increasing frequency is "The job market is getting to be like the dot com era all over again. Developer salaries are increasing, and it's getting harder and harder to attract and retain good developers." Our May <a href="/Executive_Council_Software_Excellence/">ECSE Meeting</a> focused on the topic of "Compensation, Recruiting, and Retention," and so I used that as an opportunity to dig into the question of "Is it really 1999 all over again?"</p>
<p>The first question is, <strong>Is developer compensation increasing?</strong> I think quite clearly it is. The consensus raise for 2006 was about 3.5%-4.0%. The raises being budgeted for 2007 are more variable -- I've heard a low of 3.0% and a high of 6-7%. (These figures are all North American figures. Figures in India, Russia, and Eastern Europe can be very different.) But these are not the unprecedented raises we saw in 1998-1999; they're more incremental. Note too that a "budgeted raise of 5%" doesn't mean everyone will get 5%. People who are top performers will tend to get more than that. People whose compensation has gotten behind the market will tend to get higher raises too.</p>
<p><strong>What is current developer compensation? </strong>Most of my data here is from the Seattle area. In the Seattle area, developer comp typically ranges from about $60K to about $120K, with very few people (less than 5% of the most senior people) making more than $120K. Fresh outs are being hired at $50-$60K in our area. East coast salaries tend to be similar, with higher salaries in more expensive areas (e.g., Manhattan). Salaries in less populated areas tend to be somewhat lower.</p>
<p><strong>Bonuses. </strong>Most employers report annual bonuses of 5-15% for purely technical positions, with most companies paying closer to 5% than 15%. For very senior technical people and upper-level managers (i.e., Directors and VPs), bonuses can go higher than 15%, and in a few cases quite a bit higher. One company reported going as high as 50%. Most companies give higher-percentage bonuses to more senior people, although some don't differentiate on the basis of seniority.</p>
<p><strong>Standard Benefits.</strong> We see a lot of commonality in benefits at this time. Fully-paid health coverage for employees seems to be standard among software employers. Partial coverage of dependent medical premiums seems to be common, with a few companies paying 100%. Starting vacation of 3 weeks is typical, with some companies offering only 2 weeks. Vacation increasing by an additional week after 5 years also seems to be typical. Vacation policies are almost always based on longevity with the company, and most managers have little flexibility in varying vacation policy.</p>
<p><strong>Other Benefits. </strong>We discussed signing bonuses, stock options, stock grants, and other more elaborate perq's. Signing bonuses appear to be rare, still very much the exception rather than the rule. Most employers report that prospective hires are showing little interest in stock options. Apparently the memories of the dot com collapse are still fresh enough that many people would still rather have the bird in the hand of cash now rather than the bird in the bush of equity that might be worth a lot more later. Many companies sponsor occasional low-key "morale events" such as tickets to a baseball game, dinner out, pizza and beer at the office, and that kind of thing. Other more exotic and expensive perq's seem not to be reappearing at this time.</p>
<p><strong>Hiring wars.</strong> A few companies reported losing key people, and in a few cases to "crazy offers that it just doesn't make sense to try to match." After quite a bit of discussion on this point at the ECSE meetings, the consensus seemed to be that these extreme compensation packages were more the result of a specific overactive recruiter than a symptom of the job market overall. Several companies in our area (Seattle) have reported losing staff to the most actively hiring companies (especially Google and Yahoo), but even in these cases the salaries offered were something like 20% higher, which doesn't seem to be symptomatic of any overheating in the job market. There have also been a few reported cases of very experienced people getting more than one job offer at a time, but again these seem to be the exceptions.</p>
<p><strong>So, is it 1999 all over again? </strong>I think it clearly is not 1999 all over again. What we're seeing is healthy competition for top talent, which is really business as usual -- and business as it should be. We aren't seeing elaborate perqs -- no onsite massages, concierge service, nights out in limousines, and so on. We're not seeing hiring wars for average talent -- remember in 1999 we had hiring wars even for people whose only skill was writing basic HTML. We're not seeing huge equity grants or promises of ridiculous wealth in short time frames. People seem to have already forgotten how crazy 1998 and 1999 were. One ECSE member commented that people aren't currently "Expecting to work for five years and then be able to retire." My recollection is that people at that time expected to work for <em>two </em>years and then retire! The market was unbalanced in favor of employees -- to a degree that was unhealthy, because businesses were constantly confronted with unpredictable escalations in salaries, unexpected losses of key staff, uncontrollably high turnover. There was so much chaos in the job market that businesses had difficulty finding time to actually focus on their business.</p>
<p>In 2001 through 2002 or 2003 (depending on where in the country you were), we saw a job market that was unbalanced in favor of employers. There were so few open positions available, and the software personnel who had good jobs were so reluctant to change jobs, that even some qualified people had trouble finding work. That wasn't healthy either because it can cause talented, qualified people to leave the field.</p>
<p><strong>Job Market 2007. </strong>What we are seeing today is that the best employees can command a premium, but they can't be unreasonable. Average employees can find jobs but probably aren't going to get multiple offers. The worst employees are going to struggle to find jobs at all.</p>
<p>That all sounds to me like a healthy, sustainable equilibrium -- a balance of power between employers and employees. I would be happy to see that balance continue for the foreseeable future.</p>
<p><strong>Resources</strong></p>
<ul>
<li>US Dept. of Labor, Bureau of Labor Statistics, <a href="http://www.bls.gov/OES/">Occupational Employment Statistics</a> -- amazing amount of data on this website, some broken down by state and city. </li>
<li>Job outlook for <a href="http://www.bls.gov/oes/current/oes151021.htm">Computer Programmers</a></li>
<li>Job outlook for <a href="http://www.bls.gov/oes/current/oes151031.htm">Software Engineers focusing on Applications</a></li>
<li>Job outlook for <a href="http://www.bls.gov/oes/current/oes151032.htm">Software Engineers focusing on Systems Software</a></li>
<li>Payscale's <a href="http://www.payscale.com/research/US/Job=Sr._Software_Engineer_%2f_Developer_%2f_Programmer/Salary">salary survey</a> for senior software engineers / developers / programmers</li>
<li>"Orphans Preferred," Chapter from my book <em>Professional Software Development</em> on attributes including job prospects for software personnel: [<a href="http://www.stevemcconnell.com/psd/07-orphanspreferred.htm">html</a>] [<a href="/uploadedFiles/Construx/Construx_Content/Blogs/07-OrphansPreferred.pdf">pdf</a>]</li>
</ul>]]></description>
  <dc:creator>stevemcc</dc:creator>
  <dc:date>2007-05-31T10:01:00Z</dc:date>
  <content:encoded><![CDATA[<p>A comment I'm hearing with increasing frequency is "The job market is getting to be like the dot com era all over again. Developer salaries are increasing, and it's getting harder and harder to attract and retain good developers." Our May <a href="https://www.construx.com/Executive_Council_Software_Excellence/">ECSE Meeting</a> focused on the topic of "Compensation, Recruiting, and Retention," and so I used that as an opportunity to dig into the question of "Is it really 1999 all over again?"</p>
<p>The first question is, <strong>Is developer compensation increasing?</strong> I think quite clearly it is. The consensus raise for 2006 was about 3.5%-4.0%. The raises being budgeted for 2007 are more variable -- I've heard a low of 3.0% and a high of 6-7%. (These figures are all North American figures. Figures in India, Russia, and Eastern Europe can be very different.) But these are not the unprecedented raises we saw in 1998-1999; they're more incremental. Note too that a "budgeted raise of 5%" doesn't mean everyone will get 5%. People who are top performers will tend to get more than that. People whose compensation has gotten behind the market will tend to get higher raises too.</p>
<p><strong>What is current developer compensation? </strong>Most of my data here is from the Seattle area. In the Seattle area, developer comp typically ranges from about $60K to about $120K, with very few people (less than 5% of the most senior people) making more than $120K. Fresh outs are being hired at $50-$60K in our area. East coast salaries tend to be similar, with higher salaries in more expensive areas (e.g., Manhattan). Salaries in less populated areas tend to be somewhat lower.</p>
<p><strong>Bonuses. </strong>Most employers report annual bonuses of 5-15% for purely technical positions, with most companies paying closer to 5% than 15%. For very senior technical people and upper-level managers (i.e., Directors and VPs), bonuses can go higher than 15%, and in a few cases quite a bit higher. One company reported going as high as 50%. Most companies give higher-percentage bonuses to more senior people, although some don't differentiate on the basis of seniority.</p>
<p><strong>Standard Benefits.</strong> We see a lot of commonality in benefits at this time. Fully-paid health coverage for employees seems to be standard among software employers. Partial coverage of dependent medical premiums seems to be common, with a few companies paying 100%. Starting vacation of 3 weeks is typical, with some companies offering only 2 weeks. Vacation increasing by an additional week after 5 years also seems to be typical. Vacation policies are almost always based on longevity with the company, and most managers have little flexibility in varying vacation policy.</p>
<p><strong>Other Benefits. </strong>We discussed signing bonuses, stock options, stock grants, and other more elaborate perq's. Signing bonuses appear to be rare, still very much the exception rather than the rule. Most employers report that prospective hires are showing little interest in stock options. Apparently the memories of the dot com collapse are still fresh enough that many people would still rather have the bird in the hand of cash now rather than the bird in the bush of equity that might be worth a lot more later. Many companies sponsor occasional low-key "morale events" such as tickets to a baseball game, dinner out, pizza and beer at the office, and that kind of thing. Other more exotic and expensive perq's seem not to be reappearing at this time.</p>
<p><strong>Hiring wars.</strong> A few companies reported losing key people, and in a few cases to "crazy offers that it just doesn't make sense to try to match." After quite a bit of discussion on this point at the ECSE meetings, the consensus seemed to be that these extreme compensation packages were more the result of a specific overactive recruiter than a symptom of the job market overall. Several companies in our area (Seattle) have reported losing staff to the most actively hiring companies (especially Google and Yahoo), but even in these cases the salaries offered were something like 20% higher, which doesn't seem to be symptomatic of any overheating in the job market. There have also been a few reported cases of very experienced people getting more than one job offer at a time, but again these seem to be the exceptions.</p>
<p><strong>So, is it 1999 all over again? </strong>I think it clearly is not 1999 all over again. What we're seeing is healthy competition for top talent, which is really business as usual -- and business as it should be. We aren't seeing elaborate perqs -- no onsite massages, concierge service, nights out in limousines, and so on. We're not seeing hiring wars for average talent -- remember in 1999 we had hiring wars even for people whose only skill was writing basic HTML. We're not seeing huge equity grants or promises of ridiculous wealth in short time frames. People seem to have already forgotten how crazy 1998 and 1999 were. One ECSE member commented that people aren't currently "Expecting to work for five years and then be able to retire." My recollection is that people at that time expected to work for <em>two </em>years and then retire! The market was unbalanced in favor of employees -- to a degree that was unhealthy, because businesses were constantly confronted with unpredictable escalations in salaries, unexpected losses of key staff, uncontrollably high turnover. There was so much chaos in the job market that businesses had difficulty finding time to actually focus on their business.</p>
<p>In 2001 through 2002 or 2003 (depending on where in the country you were), we saw a job market that was unbalanced in favor of employers. There were so few open positions available, and the software personnel who had good jobs were so reluctant to change jobs, that even some qualified people had trouble finding work. That wasn't healthy either because it can cause talented, qualified people to leave the field.</p>
<p><strong>Job Market 2007. </strong>What we are seeing today is that the best employees can command a premium, but they can't be unreasonable. Average employees can find jobs but probably aren't going to get multiple offers. The worst employees are going to struggle to find jobs at all.</p>
<p>That all sounds to me like a healthy, sustainable equilibrium -- a balance of power between employers and employees. I would be happy to see that balance continue for the foreseeable future.</p>
<p><strong>Resources</strong></p>
<ul>
<li>US Dept. of Labor, Bureau of Labor Statistics, <a href="http://www.bls.gov/OES/">Occupational Employment Statistics</a> -- amazing amount of data on this website, some broken down by state and city. </li>
<li>Job outlook for <a href="http://www.bls.gov/oes/current/oes151021.htm">Computer Programmers</a></li>
<li>Job outlook for <a href="http://www.bls.gov/oes/current/oes151031.htm">Software Engineers focusing on Applications</a></li>
<li>Job outlook for <a href="http://www.bls.gov/oes/current/oes151032.htm">Software Engineers focusing on Systems Software</a></li>
<li>Payscale's <a href="http://www.payscale.com/research/US/Job=Sr._Software_Engineer_%2f_Developer_%2f_Programmer/Salary">salary survey</a> for senior software engineers / developers / programmers</li>
<li>"Orphans Preferred," Chapter from my book <em>Professional Software Development</em> on attributes including job prospects for software personnel: [<a href="http://www.stevemcconnell.com/psd/07-orphanspreferred.htm">html</a>] [<a href="https://www.construx.com/uploadedFiles/Construx/Construx_Content/Blogs/07-OrphansPreferred.pdf">pdf</a>]</li>
</ul>]]></content:encoded>
 </item>
 <item rdf:about="/10x_Software_Development/Cone_of_Uncertainty_Controversy/?blogid=23485">
  <title>Cone of Uncertainty Controversy</title>
  <link>https://www.construx.com/10x_Software_Development/Cone_of_Uncertainty_Controversy/?blogid=23485</link>
<description><![CDATA[<p align="left">The May/June 2006 issue of <em>IEEE Software </em>published an <a title="&quot;Schedule Estimation and Uncertainty Surrounding the Cone of Uncertainty,&quot; Todd Little" href="http://csdl2.computer.org/persagen/DLAbsToc.jsp?resourcePath=/dl/mags/so/&amp;toc=comp/mags/so/2006/03/s3toc.xml" target="_blank">interesting article</a> that analyzed the estimation results of an extensive set of projects from Landmark Graphics. The author, Todd Little, analyzed the relationships between estimated outcomes and actual outcomes. Based on his data, he concluded that the 80% confident range of estimates did <em>not </em>narrow as the <a title="Cone of Uncertainty" href="http://www.construx.com/Page.aspx?hid=1648" target="_blank">Cone of Uncertainty</a> implies, but that the estimates continued to vary by about a factor of 3-4 for the remaining work on the project -- regardless of when in the project the estimate was created.</p>
<p align="left">There are some interesting takeaways from the article's data, and some of its conclusions are supported by the data, whereas others are not. The basic issue with the article's data is that it represents estimation accuracy <em>as estimation commonly occurs in practice</em> rather than estimation accuracy <em>when estimation is done well</em>.</p>
<p align="left">Figure 5 in Little's article is particularly interesting:</p>
<p align="left"><img style="width: 389px; height: 225px;" title="Figure 5" alt="Figure 5" src="/blogs/stevemcc/Figure5.jpg" /><br /><strong>Figure 5 from "Schedule Estimation and Uncertainty Surrounding the Cone of Uncertainty."</strong></p>
<p align="left">Figure 5 shows a scatter plot of estimates created at different points in a project's duration. The scatter plot forms a near perfect cone--but only the half of the Cone that represents underestimation! There is only a tiny scattering of points that represent overestimation (those below the 1.0 line). As a view of estimation in practice, this is consistent with data my company has seen from many of our clients. It supports the conclusion that the software industry doesn't have a <em>neutral </em>estimation problem; it has an <em>underestimation </em>problem. (This is my conclusion, not the article's.)</p>
<p align="left">The article's conclusions about the Cone of Uncertainty are less well supported. With reference to Figure 5, Little makes the observation that it forms a visual Cone, but only because the graph plots "estimated remaining duration" vs. "current position in the schedule." He points out that, since the duration remaining decreases as the project progresses, smaller estimation errors later in a project are not necessarily better. For the improved estimates to be accurate (i.e., for the Cone to be true), the estimates would need to be more accurate on a percentage-remaining basis, not just have a smaller absolute error. That analysis is all correct as far as I am concerned.</p>
<p align="left">The article then goes on to point out that the relative error of the Landmark estimates didn't actually decrease, and concludes:</p>
<blockquote><p align="left">"While the data supports some aspects of the cone of uncertainty, it doesn't support the most common conclusion that uncertainty significantly decreases as the project progresses. Instead, I found that relative remaining uncertainty was essentially constant over the project's life."</p></blockquote>
<p align="left">There are two reasons that this particular conclusion can't be drawn from Landmark's underlying data.</p>
<p align="left">First, the article misstates the "common conclusion" about the Cone. As I've emphasized when I've <a href="http://www.stevemcconnell.com/est.htm" target="_blank">written about it</a>, the Cone represents <em>best-case </em>estimation accuracy; it's easily possible to do worse -- as many organizations have demonstrated for decades. Anyone who's ever worked on a project that got to "3 weeks from completion," and then slipped 6 weeks, and then got to "3 weeks from completion" again, and then slipped another 6 weeks, knows that uncertainty doesn't automatically decrease as a project progresses. The Cone is a hope, but not a promise. Little's data simply says that the estimates in the Landmark data set weren't very accurate. It's interesting to have this data put into the public eye, but it doesn't tell us anything we didn't already know. It tells us that software projects are routinely underestimated by a lot, and that projects aren't necessarily estimated any better at the end than they were at the beginning. That's a useful reminder, as long as we don't stretch the conclusions beyond what the underlying data supports.</p>
<p align="left">The second problem with the conclusion the article draws about the Cone is that it doesn't account for the effect of iterative development. Although it isn't stated in the published article, an earlier draft of the article, circulated on the Internet in mid 2003, emphasized that the projects in the data set were using agile practices, and in particular that they emphasized responding to change over performing to plan. In other words, the projects in this data set experienced significant requirements churn.</p>
<p align="left">If the projects averaged 329 days as the article says, and if they followed agile practices as Little described in the 2003 version, there could easily be five to 10 iterations within each project. But the Cone applies to single iterations of the requirements-design-build-test process. For an analysis of the Cone of Uncertainty to be meaningful in a highly iterative context, the article would need to account for the effect of iteration on the Cone by looking at each iteration separately -- that is, by looking at 1-2 month iterations rather than looking at 329-day-long projects. The 329-day-long projects are essentially sequences of little projects, so the way the Cone of Uncertainty applies in this case is that there isn't one big 329-day Cone; there are 6-12 1-2 month Cones instead. Unfortunately, the article doesn't present the iteration data; it presents only the rolled-up 329-day data, which is meaningless in terms of drawing any conclusions about how the Cone affects estimation accuracy over the course of a project.</p>
<p align="left">The fact that requirements were treated in a highly iterative way also forces a reexamination of Figure 5. While it makes sense initially to treat Figure 5 as evidence of systemic underestimation, that conclusion can't be drawn either, because the requirements changed significantly over the course of the average 329-day project, so whatever was delivered at the end of the project was not the same thing that was estimated at the beginning. That makes the early-project estimates and the late-in-the-project estimates an apples-to-oranges comparison, i.e., not meaningful.</p>
<p align="left">Little makes an interesting comment at the end of the article that I think is a good takeaway overall. He points out that some of the variation in estimation accuracy was due to "a corporate culture using targets as estimates." Figure 5 might not provide a meaningful view of estimation accuracy, but it can certainly be interpreted as an indication that projects tend to set aggressive targets and then repeatedly fail to meet those targets. That's something we already knew, too, but it's good to have a reminder, and it's good to see that reminder supported with some data.</p>
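<p>Little's absolute-vs-relative point is easy to make concrete with a bit of arithmetic (the numbers below are hypothetical):</p>

```python
# Sketch: a smaller absolute error late in a project does not by itself
# mean estimation improved; the error relative to the remaining work is
# what the Cone is about. Numbers are hypothetical.

def relative_error(estimated_remaining, actual_remaining):
    return abs(actual_remaining - estimated_remaining) / actual_remaining

# Early in the project: 300 days of work remain, estimated as 200.
early = relative_error(200, 300)   # absolute error: 100 days
# Late in the project: 30 days remain, estimated as 20.
late = relative_error(20, 30)      # absolute error: only 10 days

# The absolute error shrank 10x, yet the relative error is identical --
# which is essentially Little's finding for the Landmark data.
print(early, late)
```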
<p align="left"><strong>Resources</strong></p>
<ul>
<li><div align="left">My <a href="http://www.stevemcconnell.com/est.htm" target="_blank">estimation book</a>, which discusses the Cone in detail </div>
</li>
<li><div align="left">Little's Article: "<a href="http://doi.ieeecomputersociety.org/10.1109/MS.2006.82" target="_blank">Schedule Estimation and Uncertainty Surrounding the Cone of Uncertainty</a>" (IEEE Computer Society MDLS membership required to access the article -- this goes to the abstract page)</div>
</li>
<li><div align="left"><a href="http://csdl2.computer.org/comp/mags/so/2006/05/s5008.pdf" target="_blank">Letters to the Editor</a> responding to Little's article </div>
</li>
<li><div align="left">Construx's Cone of Uncertainty <a href="http://www.construx.com/Page.aspx?hid=1648">white paper</a></div>
</li>
<li><div align="left">Construx's <a href="http://www.construx.com/Page.aspx?hid=1448">Cone of Uncertainty</a> poster </div>
</li>
<li><div align="left"><a href="http://www.construx.com/Page.aspx?nid=15&amp;id=32">Software Estimation In Depth</a> seminar </div>
</li>
<li><div align="left">Construx's <a href="http://www.construx.com/Page.aspx?hid=484">estimation consulting</a></div>
</li>
<li><div align="left">Other <a href="http://www.construx.com/Page.aspx?nid=297">estimation resources</a></div>
</li>
</li>
</ul>]]></description>
  <dc:creator>stevemcc</dc:creator>
  <dc:date>2007-05-23T10:02:00Z</dc:date>
  <content:encoded><![CDATA[<p>The May/June 2006 issue of <em>IEEE Software </em>published an <a target="_blank" href="http://csdl2.computer.org/persagen/DLAbsToc.jsp?resourcePath=/dl/mags/so/&amp;toc=comp/mags/so/2006/03/s3toc.xml" title="&quot;Schedule Estimation and Uncertainty Surrounding the Cone of Uncertainty,&quot; Todd Little">interesting article</a> that analyzed the estimation results of an extensive set of projects from Landmark Graphics. The author, Todd Little, analyzed the relationships between estimated outcomes and actual outcomes. Based on his data, he concluded that the 80% confident range of estimates did <em>not </em>reduce as the <a target="_blank" href="https://www.construx.com/Thought_Leadership/Books/The_Cone_of_Uncertainty/" title="Cone of Uncertainty">Cone of Uncertainty</a> implies, but that the estimates continued to vary by about a factor of 3-4 for the remaining work on the project -- regardless of when in the project the estimate was created.</p>
<p>There are some interesting takeaways from the article's data, and some of its conclusions are supported by the data, whereas others are not. The basic issue with the article's data is that it represents estimation accuracy <em>as estimation commonly occurs in practice</em> rather than estimation accuracy <em>when estimation is done well</em>. Figure 5 in Little's article is particularly interesting:</p>
<p><img width="389" height="225" src="https://www.construx.com/uploadedImages/Figure5.jpg" alt="Figure 5" title="Figure 5" /><br /><strong>Figure 5 from "Schedule Estimation and Uncertainty Surrounding the Cone of Uncertainty." </strong></p>
<p>Figure 5 shows a scatter plot of estimates created at different points in a project's duration. The scatter plot forms a near perfect cone--but only the half of the Cone that represents underestimation! There is only a tiny scattering of points that represent overestimation (those below the 1.0 line). As a view of estimation in practice, this is consistent with data my company has seen from many of our clients. It supports the conclusion that the software industry doesn't have a <em>neutral </em>estimation problem; it has an <em>underestimation </em>problem. (This is my conclusion, not the article's.)</p>
<p>The article's conclusions about the Cone of Uncertainty are less well supported. With reference to Figure 5, Little makes the observation that it forms a visual Cone, but only because the graph plots "estimated remaining duration" vs. "current position in the schedule." He points out that, since the duration remaining decreases as the project progresses, smaller estimation errors later in a project are not necessarily better. For the improved estimates to be accurate (i.e., for the Cone to be true), the estimates would need to be more accurate on a percentage-remaining basis, not just have a smaller absolute error. That analysis is all correct as far as I am concerned.</p>
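<p>Little's distinction between absolute and relative error can be made concrete with a small sketch. The checkpoint numbers below are hypothetical, purely for illustration, not from the Landmark data set: each successive estimate misses by fewer days than the last, yet the actual-to-estimated ratio for the remaining work keeps growing.</p>

```python
# Hypothetical checkpoints for a single 100-day project (illustrative
# numbers only -- not from Little's Landmark data set).
# Each row: (days elapsed, estimated days remaining, actual days remaining)
checkpoints = [
    (10, 60, 90),
    (50, 30, 50),
    (90, 4, 10),
]

for elapsed, estimated, actual in checkpoints:
    absolute_error = actual - estimated  # shrinks as the project progresses
    relative_error = actual / estimated  # the ratio-style measure Little analyzes
    print(f"day {elapsed}: off by {absolute_error} days, "
          f"actual/estimated = {relative_error:.2f}")

# The absolute errors fall (30, 20, 6 days), which is what produces the
# visual cone in Figure 5 -- but the actual/estimated ratios rise
# (1.50, 1.67, 2.50), so on a percentage-remaining basis these estimates
# are actually getting worse, not better.
```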
<p>The article then goes on to point out that the relative error of the Landmark estimates didn't actually decrease, and concludes</p>
<p>"While the data supports some aspects of the cone of uncertainty, it doesn't support the most common conclusion that uncertainty significantly decreases as the project progresses. Instead, I found that relative remaining uncertainty was essentially constant over the project's life."</p>
<p>There are two reasons that this particular conclusion can't be drawn from Landmark's underlying data.</p>
<p>First, the article misstates the "common conclusion" about the Cone. As I've emphasized when I've <a target="_blank" href="http://www.stevemcconnell.com/est.htm">written about it</a>, the Cone represents best-case <em>estimation accuracy</em>; it's easily possible to do worse -- as many organizations have demonstrated for decades. Anyone who's ever worked on a project that got to "3 weeks from completion," and then slipped 6 weeks, and then got to "3 weeks from completion" again, and then slipped another 6 weeks, knows that uncertainty doesn't automatically decrease as a project progresses. The Cone is a hope, but not a promise. Little's data simply says that the estimates in the Landmark data set weren't very accurate. It's interesting to have this data put into the public eye, but it doesn't tell us anything we didn't already know. It tells us that software projects are routinely underestimated by a lot, and that projects aren't necessarily estimated any better at the end than they were at the beginning. That's a useful reminder, as long as we don't stretch the conclusions beyond what the underlying data supports.</p>
<p>The second problem with the conclusion the article draws about the Cone is that it doesn't account for the effect of iterative development. Although it isn't stated in the published article, an earlier draft of the article, circulated on the Internet in mid 2003, emphasized that the projects in the data set were using agile practices, and in particular that they emphasized responding to change over performing to plan. In other words, the projects in this data set experienced significant requirements churn.</p>
<p>If the projects averaged 329 days as the article says, and if they followed agile practices as Little described in the 2003 version, there could easily be five to 10 iterations within each project. But the Cone applies to single iterations of the requirements-design-build-test process. For an analysis of the Cone of Uncertainty to be meaningful in a highly iterative context, the article would need to account for the effect of iteration on the Cone by looking at each iteration separately -- that is, by looking at 1-2 month iterations rather than looking at 329-day-long projects. The 329-day-long projects are essentially sequences of little projects, so the way the Cone of Uncertainty applies in this case is that there isn't one big 329-day Cone; there are 6-12 1-2 month Cones instead. Unfortunately, the article doesn't present the iteration data; it presents only the rolled-up 329-day data, which is meaningless in terms of drawing any conclusions about how the Cone affects estimation accuracy over the course of a project.</p>
<p>The fact that requirements were treated in a highly iterative way also forces a reexamination of Figure 5. While it makes sense initially to treat Figure 5 as evidence of systemic underestimation, that conclusion can't be drawn either: the requirements changed significantly over the course of the average 329-day project, so whatever was delivered at the end of the project was not the same thing that was estimated at the beginning. That makes the early-project estimates and the late-in-the-project estimates an apples-to-oranges comparison, i.e., not meaningful.</p>
<p>Little makes an interesting comment at the end of the article that I think is a good takeaway overall. He points out that some of the variation in estimation accuracy was due to "a corporate culture using targets as estimates." Figure 5 might not provide a meaningful view of estimation accuracy, but it can certainly be interpreted as an indication that projects tend to set aggressive targets and then repeatedly fail to meet those targets. That's something we already knew, too, but it's good to have a reminder, and it's good to see that reminder supported with some data.</p>
<p><strong>Resources</strong></p>
<ul>
<li>My <a target="_blank" href="http://www.stevemcconnell.com/est.htm">estimation book</a>, which discusses the Cone in detail </li>
<li>Little's Article: "<a target="_blank" href="http://doi.ieeecomputersociety.org/10.1109/MS.2006.82">Schedule Estimation and Uncertainty Surrounding the Cone of Uncertainty</a>" (IEEE Computer Society MDLS membership required to access the article -- this goes to the abstract page) </li>
<li><a target="_blank" href="http://csdl2.computer.org/comp/mags/so/2006/05/s5008.pdf">Letters to the Editor</a> responding to Little's article </li>
<li>Construx's Cone of Uncertainty <a href="https://www.construx.com/Thought_Leadership/Books/The_Cone_of_Uncertainty/">white paper</a>  </li>
<li>Construx's <a href="https://www.construx.com/Resources/Posters/Cone_of_Uncertainty/">Cone of Uncertainty</a> poster </li>
<li><a href="https://www.construx.com/Seminars/?dm=0">Software Estimation In Depth</a> seminar </li>
<li>Construx's <a href="https://www.construx.com/Thought_Leadership/Events/Practical_benefits_profound_results/">estimation consulting</a>  </li>
<li>Other <a href="https://www.construx.com/Resources_On_Software_Estimation/">estimation resources</a>  </li>
</ul>]]></content:encoded>
 </item>
 <item rdf:about="/10x_Software_Development/Is_Faster_Always_Faster_/?blogid=23485">
  <title>Is Faster Always Faster?</title>
  <link>https://www.construx.com/10x_Software_Development/Is_Faster_Always_Faster_/?blogid=23485</link>
  <description><![CDATA[<p>A reader of one of my books asked this question:</p>
<p class="indentLeft">What is the impact of an improvement in response time on increased throughput?&#160;I develop many systems, and some have instantaneous response times, some have 10 minute response times, others have 4 or 5 hour response times. What are the thresholds at which response times affect throughput? Clearly going from 30 minutes to 30 seconds would be a big improvement. But would 30 minutes to 20 minutes also be a big improvement? [this has been paraphrased for clarity].</p>
<p>I think the key assumption in this statement is this: "Clearly going from 30 minutes to 30 seconds would be a big improvement." I suspect that sometimes the dynamic is actually the opposite of what the reader implied. With small changes in response time you can probably assume an increase in throughput. If response time improves from 10 seconds to 5 seconds, you can probably assume the users will get more work done.&#160;</p>
<p>But with large changes in response time (in either direction), I believe you will see users adopt offsetting behaviors that can outweigh any differences in response time. For example, years ago when computers were changing from batch processing to interactive processing there were some studies that tried to assess the improvements in productivity attributable to interactive systems. Surprisingly, I don't recall reading any study that found clear evidence of an improvement in productivity in the move from batch processing to interactive processing. Instead, the studies found that programmers had adapted to the long wait times in batch processing environments and filled their wait time with other useful activities.</p>
<p>It's like cooking in a microwave. If I heat up frozen vegetables on the stove, I can just throw them in the pan, turn the stove on low, and go do something else for 10 minutes. If I put them into the microwave for 40 seconds, I might very well stand in front of the microwave and wait for 40 seconds. The food cooks faster with the microwave, but I might actually get more done if I use the stove.</p>
<p>Fred Brooks made a similar point in a keynote address at ICSE '95. He commented that he wasn't sure there had been any real gains in productivity arising from the move from character-based displays to GUIs. He said, "I used to write a draft of a letter and then give it to my secretary to type the final draft. Now I type the draft myself, and then I spend 20 minutes <em>making the fonts look nice</em>!" In other words, more computing power doesn't necessarily mean more productivity.</p>
<p>In the famous IBM Chief Programmer Team project, one programmer wrote 83,000 lines of code in one year. This project took place in 1968. And the code was written in a batch processing environment. And on punch cards. This person had 8 other people arrayed around him in supporting roles, but that still works out to 9,200 lines of code per staff year for a business systems project. At Construx, we see lots of companies writing similar kinds of software that don't achieve 9,200 lines of code per staff year even 40 years later, even in highly interactive environments, even with radically better tool support, even on computers that are <em>millions </em>of times more powerful. Of course we see other companies writing code much faster, though we haven't yet seen any individual programmer who has written 83,000 lines of code in one year, no matter how the team is configured.</p>
<p>Productivity is only partly a function of how fast you go. Highly productive developers need to be aware of the difference between <em>activity</em> and <em>productivity</em>. The fact that you're busy doesn't mean you're getting work done. 10x developers focus on getting the actual work of the project done. They pay close attention to their experience to discern whether the work they're doing actually means more <em>progress</em> -- or just more <em>motion</em>.</p>
<p><strong>References</strong></p>
<p><a target="_blank" href="/Seminars/?dm=1" title="10x Software Engineering Seminar">10x Software Engineering seminar</a></p>]]></description>
  <dc:creator>stevemcc</dc:creator>
  <dc:date>2007-05-21T10:41:00Z</dc:date>
  <content:encoded><![CDATA[<p>A reader of one of my books asked this question:</p>
<p class="indentLeft">What is the impact of an improvement in response time on increased throughput?&#160;I develop many systems, and some have instantaneous response times, some have 10 minute response times, others have 4 or 5 hour response times. What are the thresholds at which response times affect throughput? Clearly going from 30 minutes to 30 seconds would be a big improvement. But would 30 minutes to 20 minutes also be a big improvement? [this has been paraphrased for clarity].</p>
<p>I think the key assumption in this statement is this: "Clearly going from 30 minutes to 30 seconds would be a big improvement." I suspect that sometimes the dynamic is actually the opposite of what the reader implied. With small changes in response time you can probably assume an increase in throughput. If response time improves from 10 seconds to 5 seconds, you can probably assume the users will get more work done.&#160;</p>
<p>But with large changes in response time (in either direction), I believe you will see users adopt offsetting behaviors that can outweigh any differences in response time. For example, years ago when computers were changing from batch processing to interactive processing there were some studies that tried to assess the improvements in productivity attributable to interactive systems. Surprisingly, I don't recall reading any study that found clear evidence of an improvement in productivity in the move from batch processing to interactive processing. Instead, the studies found that programmers had adapted to the long wait times in batch processing environments and filled their wait time with other useful activities.</p>
<p>It's like cooking in a microwave. If I heat up frozen vegetables on the stove, I can just throw them in the pan, turn the stove on low, and go do something else for 10 minutes. If I put them into the microwave for 40 seconds, I might very well stand in front of the microwave and wait for 40 seconds. The food cooks faster with the microwave, but I might actually get more done if I use the stove.</p>
<p>Fred Brooks made a similar point in a keynote address at ICSE '95. He commented that he wasn't sure there had been any real gains in productivity arising from the move from character-based displays to GUIs. He said, "I used to write a draft of a letter and then give it to my secretary to type the final draft. Now I type the draft myself, and then I spend 20 minutes <em>making the fonts look nice</em>!" In other words, more computing power doesn't necessarily mean more productivity.</p>
<p>In the famous IBM Chief Programmer Team project, one programmer wrote 83,000 lines of code in one year. This project took place in 1968. And the code was written in a batch processing environment. And on punch cards. This person had 8 other people arrayed around him in supporting roles, but that still works out to 9,200 lines of code per staff year for a business systems project. At Construx, we see lots of companies writing similar kinds of software that don't achieve 9,200 lines of code per staff year even 40 years later, even in highly interactive environments, even with radically better tool support, even on computers that are <em>millions </em>of times more powerful. Of course we see other companies writing code much faster, though we haven't yet seen any individual programmer who has written 83,000 lines of code in one year, no matter how the team is configured.</p>
<p>Productivity is only partly a function of how fast you go. Highly productive developers need to be aware of the difference between <em>activity</em> and <em>productivity</em>. The fact that you're busy doesn't mean you're getting work done. 10x developers focus on getting the actual work of the project done. They pay close attention to their experience to discern whether the work they're doing actually means more <em>progress</em> -- or just more <em>motion</em>.</p>
<p><strong>References</strong></p>
<p><a target="_blank" href="https://www.construx.com/Seminars/?dm=1" title="10x Software Engineering Seminar">10x Software Engineering seminar</a></p>]]></content:encoded>
 </item>
 <item rdf:about="/10x_Software_Development/Thinking_About_Software_Executives/?blogid=23485">
  <title>Thinking About Software Executives</title>
  <link>https://www.construx.com/10x_Software_Development/Thinking_About_Software_Executives/?blogid=23485</link>
  <description><![CDATA[<p>It's hard to believe it's time to begin thinking about Construx's Executive Summit already. The Summit isn't until October (October 15-17), but there are a few long-lead-time activities. Right now I'm inviting speakers and rounding up discussion moderators. We're also finalizing hotel arrangements. Next I'll define discussion topics, and then comes the event agenda. Once that's done, we'll update the event website with 2007 information so that by May we can begin officially inviting people.            </p>
<p>This event has become one of the real high points of my year, because it gives me a chance to spend 3 intensive days with software development executives. Historically, I've spent much of my time with programmers and managers, but as the years have gone by I've spent an increasing amount of time with directors, VPs, and C-level execs. It's an interesting group, and interacting with them represents a chance to make a real difference in software development practices. The people who attend the Summit are typically the most senior technical executives in their organizations. If they can get some good insights into better software development practices, their whole staff will benefit. But if these people don't "get it," there isn't much hope for their organizations--no one above them is going to get it if they don't.</p>
<p>I've learned that as you move higher in a software organization you find some subtle shifts in viewpoint that go along with the more obvious shifts in responsibility. Lower level managers tend to spend most of their time looking downward, looking after the staff underneath them. By the time you get to the VP level and above, the orientation tends to shift a little upward and a lot outward. Execs certainly can and should care about the staff underneath them, but their day-to-day issues revolve more around peer-level executives (in large organizations), boards of directors, C-level execs, and customers. Top technical executives aren't thinking so much about the health of individual programmers, managers, or even projects. Their focus is on the health of the entire organization. Some of the irrationality that developers perceive at their level can actually look pretty rational when you see it from the executive level (although sometimes, of course, it doesn't!).</p>
<p>There are some subtle shifts in communication style that go along with the shift in viewpoint. In lower level technical ranks, it's common to find skepticism or even cynicism as the day-to-day stock in trade, and I think that goes with the territory. Good programmers have to be paranoid about all the influences that can undermine their work. This can lead to a certain negativity in their communications. It doesn't mean they're negative people; it just means that many programmers have found that the best way to ensure something <em>works</em> is to be hyper-conscious of all the ways it might <em>break</em>. For people who aren't used to that orientation, it can seem pretty negative.</p>
<p>As you move up in an organization, top executives tend to be much more focused on possibilities than on problems, as well as being more concerned with the big picture than with pesky details. Summit attendees nearly all come from software development backgrounds so you might think they would be prone to negative-sounding communications, but as a group they sound much more positive than a group of developers would. I don't know if these software executives learned to change their communication styles somewhere along their paths to executive positions, or if perhaps people with a more positive communication style tend to get selected for executive positions more often. Whatever the reason, the difference in communication style becomes very noticeable once you become sensitized to it. </p>
<p>The event also attracts fascinating people from really interesting companies that collectively are trying just about every different kind of software development practice. I find it really stimulating to be in this environment discussing software issues with people from very different companies who all share the goal of improving their software practices.</p>
<p>This year our speakers at the event are</p>
<ul>
<li>Alistair Cockburn, "The Role of Manager in Modern Agile Projects"</li>
<li>Watts Humphrey, "Process Scaling: From Small to Huge"</li>
<li>Tom DeMarco, "Quick or Dead: Organizational Velocity for an Impatient Age"</li>
<li>Howard Look, "From Screenplay to 1.0: Applying Movie-making"</li>
<li>Steve McConnell, "The Legacy of Agile Development" </li>
</ul>
<p>The speakers are really the icing on the cake. The main focus of the event is small group discussions (fewer than 10 people per discussion) in which we talk about enterprise-level software development issues. I learn a lot just by sitting and listening in these discussions.</p>
<p>If you'd like to read more about the event, please visit <a href="/Thought_Leadership/Events/Software_Executive_Summit_Page_Layout/?id=14501">www.construx.com/summit/</a>. Full details will be posted on that site shortly. [Update 5/18/07 -- full details are now posted at <a href="/Thought_Leadership/Events/Software_Executive_Summit_Page_Layout/?id=14501">www.construx.com/summit/</a>.]</p>]]></description>
  <dc:creator>stevemcc</dc:creator>
  <dc:date>2007-03-26T14:21:00Z</dc:date>
  <content:encoded><![CDATA[<p>It's hard to believe it's time to begin thinking about Construx's Executive Summit already. The Summit isn't until October (October 15-17), but there are a few long-lead-time activities. Right now I'm inviting speakers and rounding up discussion moderators. We're also finalizing hotel arrangements. Next I'll define discussion topics, and then comes the event agenda. Once that's done, we'll update the event website with 2007 information so that by May we can begin officially inviting people.            </p>
<p>This event has become one of the real high points of my year, because it gives me a chance to spend 3 intensive days with software development executives. Historically, I've spent much of my time with programmers and managers, but as the years have gone by I've spent an increasing amount of time with directors, VPs, and C-level execs. It's an interesting group, and interacting with them represents a chance to make a real difference in software development practices. The people who attend the Summit are typically the most senior technical executives in their organizations. If they can get some good insights into better software development practices, their whole staff will benefit. But if these people don't "get it," there isn't much hope for their organizations--no one above them is going to get it if they don't.</p>
<p>I've learned that as you move higher in a software organization you find some subtle shifts in viewpoint that go along with the more obvious shifts in responsibility. Lower level managers tend to spend most of their time looking downward, looking after the staff underneath them. By the time you get to the VP level and above, the orientation tends to shift a little upward and a lot outward. Execs certainly can and should care about the staff underneath them, but their day-to-day issues revolve more around peer-level executives (in large organizations), boards of directors, C-level execs, and customers. Top technical executives aren't thinking so much about the health of individual programmers, managers, or even projects. Their focus is on the health of the entire organization. Some of the irrationality that developers perceive at their level can actually look pretty rational when you see it from the executive level (although sometimes, of course, it doesn't!).</p>
<p>There are some subtle shifts in communication style that go along with the shift in viewpoint. In lower level technical ranks, it's common to find skepticism or even cynicism as the day-to-day stock in trade, and I think that goes with the territory. Good programmers have to be paranoid about all the influences that can undermine their work. This can lead to a certain negativity in their communications. It doesn't mean they're negative people; it just means that many programmers have found that the best way to ensure something <em>works</em> is to be hyper-conscious of all the ways it might <em>break</em>. For people who aren't used to that orientation, it can seem pretty negative.</p>
<p>As you move up in an organization, top executives tend to be much more focused on possibilities than on problems, as well as being more concerned with the big picture than with pesky details. Summit attendees nearly all come from software development backgrounds so you might think they would be prone to negative-sounding communications, but as a group they sound much more positive than a group of developers would. I don't know if these software executives learned to change their communication styles somewhere along their paths to executive positions, or if perhaps people with a more positive communication style tend to get selected for executive positions more often. Whatever the reason, the difference in communication style becomes very noticeable once you become sensitized to it. </p>
<p>The event also attracts fascinating people from really interesting companies that collectively are trying just about every different kind of software development practice. I find it really stimulating to be in this environment discussing software issues with people from very different companies who all share the goal of improving their software practices.</p>
<p>This year our speakers at the event are</p>
<ul>
<li>Alistair Cockburn, "The Role of Manager in Modern Agile Projects"</li>
<li>Watts Humphrey, "Process Scaling: From Small to Huge"</li>
<li>Tom DeMarco, "Quick or Dead: Organizational Velocity for an Impatient Age"</li>
<li>Howard Look, "From Screenplay to 1.0: Applying Movie-making"</li>
<li>Steve McConnell, "The Legacy of Agile Development" </li>
</ul>
<p>The speakers are really the icing on the cake. The main focus of the event is small group discussions (fewer than 10 people per discussion) in which we talk about enterprise-level software development issues. I learn a lot just by sitting and listening in these discussions.</p>
<p>If you'd like to read more about the event, please visit <a href="https://www.construx.com/Thought_Leadership/Events/Software_Executive_Summit_Page_Layout/?id=14501">www.construx.com/summit/</a>. Full details will be posted on that site shortly. [Update 5/18/07 -- full details are now posted at <a href="https://www.construx.com/Thought_Leadership/Events/Software_Executive_Summit_Page_Layout/?id=14501">www.construx.com/summit/</a>.]</p>]]></content:encoded>
 </item>
 <item rdf:about="/10x_Software_Development/The_Existential_Pleasures_of_Blogging/?blogid=23485">
  <title>The Existential Pleasures of Blogging</title>
  <link>https://www.construx.com/10x_Software_Development/The_Existential_Pleasures_of_Blogging/?blogid=23485</link>
  <description><![CDATA[<p>I've been reluctant to start a blog because the things I would blog about are just not the things that I would normally write about. Sometimes I joke that I have a <em>long </em>attention span. Most people's issue is that they can't focus for a long time; they're easily distracted and can't complete large tasks. That isn't my issue. My issue is not being able to focus for a <em>short </em>time. Sometimes I really need to dive deep and simply can't bring myself to work on the non-deep tasks. If the task is three months long and really meaty, I can do it. If it's 15 minutes long and superficial, I can't even start it. Thus the joke about the a long attention span.</p>
<p>Blogging seems to me to be quintessentially a short attention span task. That's not the greatest match for my interest in software development topics. But it isn't a bad match for my interest in recreational topics. And I think I can bring myself to focus on software development in bite-size chunks, at least from time to time. Consequently I've set up two blogs, one for software development and one for everything else. This blog, <a href="/Blogs/10x_Software_Development/?id=15082">10x Software Development</a>, will focus on leading software development practices. My other blog, <a href="/Blogs/Waxing_Philosophical/?id=15168">Waxing Philosophical</a>, will focus on more personal topics.</p>
<p>Cheers,<br />Steve McConnell</p>]]></description>
  <dc:creator>stevemcc</dc:creator>
  <dc:date>2007-03-24T13:57:00Z</dc:date>
  <content:encoded><![CDATA[<p>I've been reluctant to start a blog because the things I would blog about are just not the things that I would normally write about. Sometimes I joke that I have a <em>long </em>attention span. Most people's issue is that they can't focus for a long time; they're easily distracted and can't complete large tasks. That isn't my issue. My issue is not being able to focus for a <em>short </em>time. Sometimes I really need to dive deep and simply can't bring myself to work on the non-deep tasks. If the task is three months long and really meaty, I can do it. If it's 15 minutes long and superficial, I can't even start it. Thus the joke about the a long attention span.</p>
<p>Blogging seems to me to be quintessentially a short attention span task. That's not the greatest match for my interest in software development topics. But it isn't a bad match for my interest in recreational topics. And I think I can bring myself to focus on software development in bite-size chunks, at least from time to time. Consequently I've set up two blogs, one for software development and one for everything else. This blog, <a href="https://www.construx.com/Blogs/10x_Software_Development/?id=15082">10x Software Development</a>, will focus on leading software development practices. My other blog, <a href="https://www.construx.com/Blogs/Waxing_Philosophical/?id=15168">Waxing Philosophical</a>, will focus on more personal topics.</p>
<p>Cheers,<br />Steve McConnell</p>]]></content:encoded>
 </item>
</rdf:RDF>

