<?xml version="1.0" encoding="utf-8"?>
<feed xmlns="http://www.w3.org/2005/Atom">
 
 <title>Kevin Hickey's Blog</title>
 <link href="http://kevinmhickey.github.com/atom.xml" rel="self"/>
 <link href="http://kevinmhickey.github.com/"/>
 <updated>2016-03-22T01:51:01+00:00</updated>
 <id>http://kevinmhickey.github.com/</id>
 <author>
   <name>Kevin Hickey</name>
   <email>kevin@kevinmhickey.com</email>
 </author>

 
 <entry>
   <title>What is a story point?</title>
   <link href="http://kevinmhickey.github.com/2013/02/10/what-is-a-story-point"/>
   <updated>2013-02-10T00:00:00+00:00</updated>
   <id>http://kevinmhickey.github.com/2013/02/10/what-is-a-story-point</id>
   <content type="html">
&lt;p&gt;Remember, points aren’t hours or days, they’re relative complexity.&lt;/p&gt;

&lt;p&gt;Relative to what?&lt;/p&gt;

&lt;p&gt;Relative to the baseline stories that our team has defined.&lt;/p&gt;

&lt;p&gt;But what do you mean by complexity?&lt;/p&gt;

&lt;p&gt;Complexity is the amount of work you have to do to complete the story.&lt;/p&gt;

&lt;p&gt;So it’s the relative amount of time that the team is going to spend on this story?&lt;/p&gt;

&lt;p&gt;Yes, but as it relates to the other stories we have experience with.&lt;/p&gt;

&lt;p&gt;Ok - two days.&lt;/p&gt;

&lt;p&gt;No, no, no points are relative complexity, not days!&lt;/p&gt;

&lt;p&gt;We’ve all heard exchanges like this in Agile estimation and iteration planning meetings since the beginning of Agile.  To get away from the outdated notion of hours and tasks we moved to points and stories.  But are they really any different?  Could we make them different?  Why would we want to?  Isn’t the current method working just fine?&lt;/p&gt;

&lt;p&gt;Let me answer the last question first: Sure it is.  Agile projects are successful every day, more so than their waterfall uncles of the past.  But at the same time, better than waterfall doesn’t seem to be that high a bar.  I wonder if we can do better than Agile.&lt;/p&gt;

&lt;h3 id=&quot;what-story-points-do-well&quot;&gt;What story points do well&lt;/h3&gt;

&lt;h3 id=&quot;where-story-points-fall-short&quot;&gt;Where story points fall short&lt;/h3&gt;

&lt;h3 id=&quot;from-the-customers-point-of-view&quot;&gt;From the customer’s point of view…&lt;/h3&gt;

</content>
 </entry>
 
 <entry>
   <title>How does my Android app draw to the screen?</title>
   <link href="http://kevinmhickey.github.com/android/2013/02/02/how-does-my-android-app-draw-to-the-screen"/>
   <updated>2013-02-02T00:00:00+00:00</updated>
   <id>http://kevinmhickey.github.com/android/2013/02/02/how-does-my-android-app-draw-to-the-screen</id>
   <content type="html">
&lt;h3 id=&quot;a-little-bit-of-background&quot;&gt;A little bit of background&lt;/h3&gt;
&lt;p&gt;A short time ago (which feels like another lifetime these days) I spent a lot of time porting the Android platform to new systems.  Specifically, I worked for the manufacturer of a MIPS-based SoC that was completely unsupported by Android.  Over the course of a few years, I ported Cupcake (1.5), Eclair (2.1), Froyo (2.2) and Gingerbread (2.3) and had a good look at the internals of the OS.  I have not spent a lot of time with Honeycomb, Ice Cream Sandwich or Jellybean so I will not be discussing them here.  I have heard that some things were improved in Jellybean but I would guess that the fundamentals remain the same.&lt;/p&gt;

&lt;p&gt;I was recently discussing different ways that applications and platforms draw to the screen with &lt;a href=&quot;http://paulhammant.com&quot;&gt;Paul Hammant&lt;/a&gt;.  In that &lt;a href=&quot;http://paulhammant.com/2013/02/04/the-importance-of-the-dom&quot;&gt;discussion&lt;/a&gt; we touched on a few platforms including Android.  The implementation of the UI toolkit and rendering engine in Android is fairly unique and, having spent some quality time with it, I decided to elaborate.&lt;/p&gt;

&lt;h3 id=&quot;why-not-swing&quot;&gt;Why not Swing?&lt;/h3&gt;
&lt;p&gt;When Google developed Android, they made two big design decisions.  First, they chose to create the virtual machine and class libraries internally.  Second, they would license as much of the OS as they could under the &lt;a href=&quot;http://en.wikipedia.org/wiki/Apache_license&quot;&gt;Apache license&lt;/a&gt;.  This precluded them from using any existing Java libraries, including Swing, and led to a new graphics library.  The main reason for these decisions was to encourage vendors to use the platform without fear of the GPL requiring them to release their source code.  I imagine that they were also interested in some level of creative control and wanted to enhance the GUI toolkit in ways that Sun (later Oracle) may not have supported.&lt;/p&gt;

&lt;h3 id=&quot;rectangles-to-triangles&quot;&gt;Rectangles to Triangles&lt;/h3&gt;
&lt;p&gt;In addition to a new GUI toolkit, Google’s Android engineers created their own rendering engine for it.  Instead of any of the standard X Window implementations or one of the new upstarts, they created something unique.  Each displayable application is given a 2D surface.  Once the application paints its widgets and graphics to this surface, it is passed to an OpenGL ES rendering engine to be composited onto the screen.  Since any surface may be transparent, all viewable surfaces must be maintained in memory and composited on every change.  This includes the launcher application and the single running full-screen application.&lt;/p&gt;

&lt;p&gt;While this seems like an elegant and flexible implementation it is, in my opinion, the primary cause of the perceived sluggishness and short battery life of Android devices.  The original launcher application used by Cupcake through Froyo had no fewer than twelve layers that had to be composited.  The often-derided on-screen keyboard was made up of at least four.  Factor in an application and the system may have to render up to 20 layers for a keypress!  This approach would work well on a desktop where power is not a concern, but in the mobile space computation costs battery and hardware is limited in capability.&lt;/p&gt;

&lt;h3 id=&quot;a-little-history&quot;&gt;A little history&lt;/h3&gt;
&lt;p&gt;When Android Cupcake was launched, most mobile devices did not have OpenGL hardware available.  Google provided an OpenGL ES software emulation package for those that did not.  While functional, my first port was to a CPU that lacked both OpenGL hardware and a floating-point unit.  My customer wanted to use Android on a digital picture frame with a 1024x768 screen.  Once running, simple operations like opening the application “drawer” were a performance disaster.  My Windows CE counterparts had a few good laughs at my fancy new OS’s expense.  I ended up having to disable the alpha-blending algorithms by shorting them to full opacity just to get a demo running.&lt;/p&gt;

&lt;p&gt;Android Eclair represented a major shift.  Android now required OpenGL hardware; the emulation library was removed.  To their credit, the Android developers made it very easy to use a pre-built OpenGL ES library.  The new requirement prevented many devices on the market from upgrading to Eclair, leading to the first forced fragmentation of the Android handset market.  I would have preferred an alternate layering implementation that did not require hardware acceleration for legacy or low-end devices and an advanced library for those devices that could support it.&lt;/p&gt;

&lt;h3 id=&quot;a-better-future&quot;&gt;A better future…?&lt;/h3&gt;
&lt;p&gt;It may be too late for Android to address this problem going forward.  Most of the current hardware platforms have adequate rendering power to overcome the shortfalls in the software design.  It is unlikely that the effort will be spent to optimize a part of the operating system that is not a roadblock.  I believe that software improvement could increase device runtime and improve the user experience.  I would prefer a more holistic approach that pre-determines the relevant surfaces before rendering anything.  This is difficult to achieve in the application-driven rendering model.  The surfaces have little to no knowledge of their peers or parents, preventing any collaboration or optimization.  A wholesale redesign of the rendering engine might be required for any improvement.  It appears that Microsoft may be heading to a more unified approach with Metro by using a DOM, as mentioned in &lt;a href=&quot;http://paulhammant.com/2013/02/04/the-importance-of-the-dom&quot;&gt;Paul’s post&lt;/a&gt;.&lt;/p&gt;

&lt;h3 id=&quot;conclusion&quot;&gt;Conclusion&lt;/h3&gt;
&lt;p&gt;Android is an amazingly popular and powerful platform whose success cannot be denied.  In their attempt to keep a favorable license in place, Google engineers re-solved the UI toolkit and rendering problem.  It offered the ability to build a beautiful UI including transparency and layering, but missed the mark on efficiency and scalability for a mobile platform.  Hardware has caught up to the software, making change unlikely, but it will always leave me wondering what could have been.&lt;/p&gt;
</content>
 </entry>
 
 <entry>
   <title>How big should a story be?</title>
   <link href="http://kevinmhickey.github.com/2013/01/27/how-big-should-a-story-be"/>
   <updated>2013-01-27T00:00:00+00:00</updated>
   <id>http://kevinmhickey.github.com/2013/01/27/how-big-should-a-story-be</id>
   <content type="html">
&lt;h3 id=&quot;intro&quot;&gt;Intro&lt;/h3&gt;
&lt;p&gt;Stories are the fundamental unit of work for an agile team.  For a project to be successful, it is important that stories are well written, sliced properly and correctly sized.  Story slicing and sizing are often difficult to get right, especially for those new to agile.&lt;/p&gt;

&lt;p&gt;So how big should a story be?&lt;br /&gt;
Simply, as small as it can be, no bigger nor smaller.&lt;/p&gt;

&lt;h3 id=&quot;what-does-that-mean&quot;&gt;What does that mean?&lt;/h3&gt;
&lt;p&gt;It means that a story should cover exactly one piece of functionality that the customer would find useful.  This piece of functionality should be independently implementable, testable and deployable.  It should be something that, when deployed, will give the customer some value.&lt;/p&gt;

&lt;h3 id=&quot;counterarguments&quot;&gt;Counterarguments&lt;/h3&gt;

&lt;h3 id=&quot;single-responsibility-principle&quot;&gt;Single Responsibility Principle&lt;/h3&gt;

&lt;h3 id=&quot;continuous-feedback&quot;&gt;Continuous feedback&lt;/h3&gt;

&lt;h3 id=&quot;pipelining-the-team&quot;&gt;Pipelining the team&lt;/h3&gt;
</content>
 </entry>
 
 <entry>
   <title>Why do we write automated tests?</title>
   <link href="http://kevinmhickey.github.com/testing/2013/01/19/why-do-we-write-automated-tests"/>
   <updated>2013-01-19T00:00:00+00:00</updated>
   <id>http://kevinmhickey.github.com/testing/2013/01/19/why-do-we-write-automated-tests</id>
   <content type="html">
&lt;h3 id=&quot;intro&quot;&gt;Intro&lt;/h3&gt;
&lt;p&gt;Automated software testing is important.  It is what gives us the confidence to write code, to change code and to understand code. Thinking about all three of these factors for every test will lead to better tests and better code.&lt;/p&gt;

&lt;h3 id=&quot;writing-code&quot;&gt;Writing code&lt;/h3&gt;
&lt;p&gt;The most obvious reason to write automated tests is to validate behavior during development.  From mundane logic to complex algorithms, a fast-running automated test focused on a specific behavior can prevent the simple mistakes that take a long time to debug.  I practice Test Driven Development (TDD) when developing new code.  At a high level, TDD focuses on writing the test before the production code, thus guaranteeing a high level of test coverage and a low bug rate.  When done properly, TDD will drastically reduce debug time and improve confidence in estimation and delivery.&lt;/p&gt;
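The test-first rhythm described above can be sketched in a few lines.  This is a hypothetical illustration, not code from any real project: cart_total and the test names are invented.

```python
# A minimal TDD sketch (hypothetical example): the tests are written first
# and fail, then just enough production code is added to make them pass.

# Step 1 (red): these behavior-pinning tests exist before the implementation.
def should_sum_an_empty_cart_to_zero():
    assert cart_total([]) == 0

def should_sum_item_prices():
    assert cart_total([2, 3, 5]) == 10

# Step 2 (green): the minimal production code that satisfies the tests.
def cart_total(item_prices):
    return sum(item_prices)

should_sum_an_empty_cart_to_zero()
should_sum_item_prices()
```

Because the tests name the behavior rather than the mechanism, they also serve as a specification for the next developer.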

&lt;h3 id=&quot;changing-code&quot;&gt;Changing code&lt;/h3&gt;
&lt;p&gt;One of the biggest problems with legacy codebases that lack automated tests is that change incurs a high level of risk.  There is no way to know how a given code change might unexpectedly change the behavior of the system.  Worse yet, there is no way to know when you have adequately validated a change.  Having an automated test suite mitigates this risk by providing a safety net against unexpected change.  This is especially important when refactoring, as it verifies that the behavior remains the same while the implementation changes.&lt;/p&gt;

&lt;h3 id=&quot;understanding-code&quot;&gt;Understanding code&lt;/h3&gt;
&lt;p&gt;The third and least obvious reason for testing is documentation of intent.  The tests serve as a living document to other developers letting them know what behavior you expect the code to have.  Using tests as documentation is better than a design document or code comment for two reasons.  First, it is much less tedious to write because code is more expressive to developers than prose.  Second, since it is compilable and executable it must be kept up to date or the test suite fails.  Traditional documentation, when written at all, is rarely maintained and often out of date.  It also usually documents what the code does but not why it does it.  Tests as documentation do both.&lt;/p&gt;

&lt;p&gt;One practice critical to tests serving as documentation is naming.  Tests should be named for exactly what they do, not how they do it.  For example:
&lt;code class=&quot;highlighter-rouge&quot;&gt;shouldReturnCorrectSkyColorFromGetSkyColor()&lt;/code&gt; is a bad test name for a few reasons.  First, it says what the code does, not what behavior it should exhibit.  Avoid words like “return” or “call” or other programming jargon.  Second, it is too generic.  What is the “correct sky color”?  How does the reader know it is correct?  Finally, it does not describe the behavioral precondition at all.  Tests should be about setting up a situation, invoking part of the system, and verifying a result.  A much better test name would be &lt;code class=&quot;highlighter-rouge&quot;&gt;shouldIndicateSkyIsBlueWhenNoCloudsPresent&lt;/code&gt;.  Given this test, I would probably expect to see &lt;code class=&quot;highlighter-rouge&quot;&gt;shouldIndicateSkyIsGreenWhenTornadoImminent&lt;/code&gt; and &lt;code class=&quot;highlighter-rouge&quot;&gt;shouldIndicateSkyIsGrayWhenCumulusCloudsPresent&lt;/code&gt;.  These tests clearly specify what the preconditions are.  They are also specific as to the result value for those preconditions.  They do not mention code jargon or method names.  It may be that the color is indicated by a return value or by setting a member variable or calling a callback method.  The content of the test will tell you how it is done.&lt;/p&gt;
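As a sketch of how such behavior-named tests might look, assuming a hypothetical describe_sky function (the sky-color API here is invented purely for illustration):

```python
# Hypothetical sketch: the sky-color example with behavior-named tests.
# describe_sky and its parameters are invented for illustration.

def describe_sky(clouds_present, tornado_imminent):
    """Report the sky color implied by the given weather conditions."""
    if tornado_imminent:
        return "green"
    if clouds_present:
        return "gray"
    return "blue"

# Each test states its precondition and expected result, not the mechanism.
def should_indicate_sky_is_blue_when_no_clouds_present():
    assert describe_sky(clouds_present=False, tornado_imminent=False) == "blue"

def should_indicate_sky_is_green_when_tornado_imminent():
    assert describe_sky(clouds_present=True, tornado_imminent=True) == "green"

def should_indicate_sky_is_gray_when_clouds_present():
    assert describe_sky(clouds_present=True, tornado_imminent=False) == "gray"

should_indicate_sky_is_blue_when_no_clouds_present()
should_indicate_sky_is_green_when_tornado_imminent()
should_indicate_sky_is_gray_when_clouds_present()
```

Notice that nothing in the test names mentions return values or method calls; the names survive even if the mechanism changes.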

&lt;h3 id=&quot;conclusion&quot;&gt;Conclusion&lt;/h3&gt;
&lt;p&gt;When writing tests, keep in mind that they are more than just making sure you get the logic right.  Tests are your safety net for future change and documentation for you and other developers on your team.  Take the time to test correctly and you will save debug and bugfix time in your current iteration and those to follow.&lt;/p&gt;
</content>
 </entry>
 
 <entry>
   <title>Why?  DRY!</title>
   <link href="http://kevinmhickey.github.com/2013/01/12/why-dry"/>
   <updated>2013-01-12T00:00:00+00:00</updated>
   <id>http://kevinmhickey.github.com/2013/01/12/why--dry</id>
   <content type="html">
&lt;p&gt;I like to talk.  A lot.  And about just about everything.  As my career develops, I find myself coaching more and that means repeating speeches about the subjects that matter the most to me.  Since one of my favorite software engineering practices is Don’t Repeat Yourself (or DRY), I thought I should put it to work in my life as well as my code.  I started this blog to have somewhere to point people and avoid repeating myself.&lt;/p&gt;

&lt;p&gt;The basic principle of DRY in software is that a given piece of information should be represented only once in the system.  The most obvious use of this rule is that “copy and paste” should be avoided in favor of extracting methods and classes.  These duplications are rampant in legacy codebases and those developed on short schedules.  The reason to avoid duplication is that it makes change difficult and risky.  For example, suppose a buggy piece of code is replicated in three places in a codebase and one of the instances is found by QA.  Now there are really three bugs in the application but only one is known.  The developer fixing the detected bug will either fix just one, leaving bugs in the system, or notice the others and have to fix them as well.  Furthermore, refactoring code becomes more difficult with duplication.  Renaming a variable or method results in more touches.&lt;/p&gt;
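The extract-method fix might look like this minimal sketch, where apply_discount and its call sites are hypothetical names rather than real code:

```python
# Hypothetical sketch: removing copy/paste duplication by extracting a method.
# Before: the same discount rule was pasted into three call sites.
# After: one function holds the rule, so a bug fix here fixes every caller.

def apply_discount(price, rate):
    """The single representation of the discount rule in the system."""
    return round(price * (1.0 - rate), 2)

# Every former copy/paste site now calls the one shared rule.
invoice_total = apply_discount(100.0, 0.10)
cart_total = apply_discount(80.0, 0.25)
refund_credit = apply_discount(40.0, 0.50)
```

If QA later finds a rounding bug in the rule, there is exactly one place to fix it and all three callers are repaired at once.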

&lt;p&gt;Another dimension to DRY crosses the boundaries of code, documentation and tests.  I believe that one of the purposes of a comprehensive automated test suite is to serve as documentation of behavior and developer intent.  Often, these same ideas are also contained in a design document or some other non-compiled written piece.  The issue here is similar to the copy/paste problem in that one version may be updated and the others forgotten.  Most often, the code is updated and the documents are not.  When tests are used in place of documentation, they must be kept in sync with the code or they do not pass.  A problem that arises with this technique is that customers and other non-technical folks don’t like to read tests.  Two solutions I’ve used are technologies like JBehave and scripts that parse tests and generate documentation.&lt;/p&gt;

&lt;p&gt;DRY is not just about text replication.  The really nefarious DRY violations are bigger, harder and very expensive to fix.  These are the design duplications and data duplications.  Design duplications show up when similar (or even the same) concepts are implemented differently within an application.  For example, in a data-driven application a view that shows a specific table or logical piece of data should be implemented only once.  It can be parameterized if different instances need slightly different behavior, but the base behavior should not be replicated.  To do otherwise means that any change (for example, adding a column) requires many touches to the software, increasing development time, increasing the risk of bugs and boring developers.  Data duplications are similar.  With some exceptions for performance, a piece of data should be represented exactly once in the database.  Aggregations or calculations should be performed as needed by the application.  To do otherwise again opens the opportunity for bugs and requires that the application know when to refresh the stored calculation.  This often leads to bugs or violations of other design principles (e.g. the Single Responsibility Principle).&lt;/p&gt;
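A parameterized view of the kind described might be sketched like this; render_table and its parameters are invented for illustration and stand in for whatever view layer an application actually uses:

```python
# Hypothetical sketch: one parameterized table view instead of several copies.
# Adding a column becomes a one-place change for every instance of the view.

def render_table(title, columns, rows):
    """The single implementation of a data-table view; instances vary only
    in the parameters they pass."""
    lines = [title, " | ".join(columns)]
    for row in rows:
        lines.append(" | ".join(str(row[name]) for name in columns))
    return "\n".join(lines)

# Two different screens reuse the same view with different parameters.
orders_view = render_table("Orders", ["id", "total"], [{"id": 1, "total": 9.99}])
users_view = render_table("Users", ["id", "name"], [{"id": 7, "name": "Ada"}])
```

The base behavior lives in one place; a new column shows up everywhere by changing only the parameter list at each call site that wants it.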

&lt;p&gt;To sum up, and at the risk of being a hypocrite, Don’t Repeat Yourself!  Keep duplication out of your code, out of your communication, out of your data and out of your design.  You’ll save effort, have fewer bugs, and have more time to do interesting work!&lt;/p&gt;
</content>
 </entry>
 
 <entry>
   <title>Types of change</title>
   <link href="http://kevinmhickey.github.com/2013/01/12/types-of-change"/>
   <updated>2013-01-12T00:00:00+00:00</updated>
   <id>http://kevinmhickey.github.com/2013/01/12/types-of-change</id>
   <content type="html">
&lt;p&gt;There are two main kinds of change: additive change and changitive change.  Additive change is much less risky.  Adding an API, method, or class does not impact existing code.  Changitive change carries much more risk.  Changing an existing API or behavior may have unintended consequences, especially in a legacy codebase.  Without comprehensive automated tests as a shield, changitive change should only be undertaken with great care.&lt;/p&gt;
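The distinction can be sketched with a hypothetical example; PriceService and its methods are invented for illustration:

```python
# Hypothetical sketch contrasting the two kinds of change.

class PriceService:
    # Existing API: callers throughout the codebase depend on this behavior.
    def price_in_dollars(self, cents):
        return cents / 100.0

    # Additive change: a brand-new method cannot break any existing caller.
    def price_in_euros(self, cents, dollars_per_euro=1.10):
        return cents / 100.0 / dollars_per_euro

# A changitive change, such as making price_in_dollars return a formatted
# string, would silently break every caller doing arithmetic on the result.
# A behavior-pinning test acts as the shield described above.
def should_keep_returning_a_numeric_dollar_amount():
    assert PriceService().price_in_dollars(250) == 2.5

should_keep_returning_a_numeric_dollar_amount()
```

The additive path leaves the old method untouched; the changitive path is only safe once a test pins the behavior the callers rely on.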
</content>
 </entry>
 
 
</feed>
