Using Storyboards
Martin Pilkington, 2013-10-15

Storyboards seem to be a big point of contention in iOS development. Some see them as wonderful additions, some as a poorly designed and pointless hindrance that Apple seems intent on force feeding us. There is one thing that’s consistent though: almost nobody is using them right.

That’s a bold statement to make. It's based on the many conversations I've had with people, and the many tweets and blog posts on the issue. One of the key things I see is a rather innocuous question: NIBs or Storyboards? This highlights a massive misunderstanding of what Storyboards offer us, as it pits them as a replacement for NIBs. In reality they can easily be used to complement NIBs.

In this post I want to show how I’ve been using Storyboards and NIBs, together with a few ideas I’ve been throwing around for structuring apps. It may not represent the best way of doing things; there are even some parts I’m still unsure of myself. Hopefully it will give you some insight into how to get more out of Storyboards and NIBs, or at least provide a starting point for debate.

The App

I’ll be referencing a sample project that I’ve posted on GitHub. This won’t be a step-by-step guide on how to build the app, but more a post highlighting interesting aspects.

The app is quite basic. It contains a list of entries and tapping on one takes you to a details screen. From this screen you can also view more details. There is also a “settings” screen, which actually just shows a similar list. This app allows me to show some very important concepts though.

Splitting Up Your App

The biggest problem I see with people using Storyboards is that they throw all the UI for their app into a single Storyboard. This is equivalent to throwing all the code for the same app into a single file. It becomes hard to understand, hard to use and almost impossible to work with on a team. If we know not to do this with code, why do we do it with Storyboards?

It’s partly Apple’s fault. They haven’t really shown us how to programmatically interact with a Storyboard, instead focussing on how you can easily add scenes and segues. They’ve also promoted the single Storyboard idea, though that may be a side effect of the simplicity of the projects they show. First we'll look at the conceptual side of splitting up our app into several Storyboards, then in the next section we’ll see how we can hook things up with code.

We want to try and find clean breaks in our app, where we can separate out functionality. A good sign for this is where we do some significant context-changing transition, such as a modal transition. If we take the camera app, we have two contexts: taking photos and viewing/editing photos. In iBooks we have browsing books, we have viewing a book and we have viewing the iBooks Store. In the sample app we have the Settings and the Entries context.

Even though there is some degree of hierarchy linking these, it is much weaker than the hierarchy linking the views within them. When we see these contexts, it’s a sign that we can split things up into a separate Storyboard. It’s rare that we work on UI impacting features that span contexts, so by dividing our Storyboards along these lines we make it easier for us to focus, as all the irrelevant stuff is hidden away in another file. It also reduces the risk of merge conflicts, especially on smaller teams where it’s more likely you’ll only have a single person working on any one context at any time.


So we’ve got our separate Storyboards and we’ve linked the scenes within them, but how do we replace the segues we had before? We can’t link between Storyboards in Xcode. The answer comes in one of the simplest classes in UIKit: UIStoryboard. It only contains 3 methods. The first, +storyboardWithName:bundle:, creates a Storyboard object from the file with the supplied name. The other two allow us to create view controllers from our Storyboard.

A Storyboard gives us two ways to reference a view controller. The first is to assign it as the initial view controller. This is shown in the editor as an arrow pointing from nothing towards a scene. The other way is to give our scenes identifiers. These are strings that we can pass into -instantiateViewControllerWithIdentifier: to generate a view controller object from that scene.
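As a minimal sketch of both approaches (the "Entries" and "moreDetails" names are the ones used in the sample project; adjust them for your own Storyboards):

UIStoryboard *entriesStoryboard = [UIStoryboard storyboardWithName:@"Entries" bundle:nil];

//1. The initial view controller, i.e. the scene the arrow points at
UIViewController *initialController = [entriesStoryboard instantiateInitialViewController];

//2. Any scene that has been given an identifier in the editor
UIViewController *detailsController = [entriesStoryboard instantiateViewControllerWithIdentifier:@"moreDetails"];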

There is one more useful addition, this time to UIViewController, which allows for some really powerful concepts: the storyboard property. This holds a reference to the UIStoryboard object representing the Storyboard the view controller was created from. As we usually have a single root object in a Storyboard (having multiple is a sign that maybe you need to split it up), this means we only need to create a UIStoryboard object once.

Showing a scene from a Storyboard

We’re going to take the reins from UIKit and handle setting up the initial Storyboard ourselves. We’ll see this in two places. The first is -[M3AppController launchInWindow:], where we set the root view controller of a window and make that window visible. There is nothing really different here to what you may have seen before (except that it’s usually found in the app delegate, but we’ll explain why it’s here later in the post).

You’ll see that the root view controller is coming from -[M3AppFactory entriesNavigationController]. This is just a lazy loading getter, but it is doing its loading in a slightly different way. We’re not calling alloc/init on a class. Instead we’re creating a storyboard (line 26) and setting our view controller to be the initial view controller (line 27), in this case the navigation controller for our entries context.
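As a rough sketch (the sample project's actual code may differ slightly in the details), the getter looks something like this:

- (UINavigationController *)entriesNavigationController {
	if (!_entriesNavigationController) {
		UIStoryboard *entriesStoryboard = [UIStoryboard storyboardWithName:@"Entries" bundle:nil];
		_entriesNavigationController = [entriesStoryboard instantiateInitialViewController];
	}
	return _entriesNavigationController;
}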

We can also transition to another scene quite easily. The simplest example is in -[M3EntryDetailsViewController showMoreDetails:]. All we do here is instantiate our view controller, set the data object on it, then push it onto our navigation controller.
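In sketch form it's something like the following, where self.entry stands in for whatever property holds the current data object (the exact types in the sample project may differ):

- (IBAction)showMoreDetails:(id)sender {
	//"moreDetails" is the identifier given to the scene in Entries.storyboard
	id moreDetailsController = [self.storyboard instantiateViewControllerWithIdentifier:@"moreDetails"];
	[moreDetailsController setEntry:self.entry];
	[self.navigationController pushViewController:moreDetailsController animated:YES];
}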

You may be asking why I bothered using Storyboards rather than just instantiating the class itself. The reason is that loosely coupled code is good, and Storyboards help us achieve that. I could potentially give a different scene the “moreDetails” name. As long as this scene also responds to -setEntry: then the code will work as is. I’m moving the details of the high level app flow out of my view controllers and into the Storyboard. I could even completely swap out the Storyboard, as the class only references the storyboard property rather than any specific Storyboard.


If you have looked at Entries.storyboard, you will have noticed that I have connected my scenes up with Segues, but I haven’t actually used them for any of my transitions. I just want to cover the pros and cons of Segues and why I’m currently doing them the way I am.

The big problem with Segues is that they force you to funnel all your logic through a single method, -prepareForSegue:sender:. It is very much the same problem faced with KVO. You end up with a method that has a series of if statements that work out the correct code to call based on a context. It feels awkward and not very OOP. This is why I tend to avoid them, instead favouring handling the transitions myself. If they operated in more of a target-action system I’d be much more inclined to use Segues.

There are good reasons to use Segues though. The method we’ve been looking at makes some assumptions about the app structure. It assumes this view controller will be displayed in a navigation controller and that the more details view controller should also be shown in that navigation controller. Just as Storyboards allow us to pull out the details of *what* view controller we’re moving to, Segues allow us to pull out the details of *how* we move to that controller.

So why connect my scenes up with Segues in the Storyboard if I’m not going to use them? Partly because my Storyboards act as a visual representation of the flow through the app and the arrows help with that, but mostly because hooking them up makes configuring navigation items easier in the editor.


You may be wondering how NIBs fit into this world. For all the benefits of Storyboards in structuring our apps, they kind of suck at layout. Sometimes you have views where you have a complex layout and/or one that doesn’t neatly fit the confines of a full screen view controller. You could handle it all in code, but that’s a lot of writing, testing and debugging that needs doing. It’s always been far simpler to lay it out in a NIB.

As with UIStoryboard, a lot of developers aren’t fully aware of how to interact with NIBs in code. It’s understandable, as we often have it handled for us by the likes of UIViewController. The main class we use for working with NIBs is the UINib class. Like UIStoryboard, it is incredibly simple, with two methods for creating a UINib object and one for unpacking its objects.

We use a NIB in the sample project for the details screen. This is a scroll view that needs to contain a layout. In this case the layout is quite simple, but if we had a more complex data structure it could require a more complex view to display. We lay this out in EntryDetailsContentView.xib. This contains a view of type M3EntryDetailsContentView and has a File’s Owner of type M3EntryDetailsViewController. However, instead of making the view in the NIB the view controller’s view, we have hooked it up to the contentView outlet.

The magic is done in M3EntryDetailsViewController.m on lines 25-26. Here, we load the NIB and then we instantiate it, passing ourself in as the owner. What this does is load the NIB and set up all the outlets. By passing ourself in as the owner, all the outlets and actions connected to the File’s Owner in the NIB are now connected to us. This means that our contentView property is now filled. We then just add the content view to the scroll view and set up its constraints.
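In essence, those two lines do something like this (the NIB name and contentView outlet come from the sample project; the scrollView property name is my assumption):

UINib *contentNib = [UINib nibWithNibName:@"EntryDetailsContentView" bundle:nil];
[contentNib instantiateWithOwner:self options:nil];

//self.contentView is now connected to the view laid out in the NIB
[self.scrollView addSubview:self.contentView];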

The CEO Object

We’ve spent some time splitting up our app and nicely partitioning things. Unfortunately, just as two objects may want to talk, two contexts within the app may want to talk (we can almost think of each partitioned context as an object in its own right). There are already various solutions we could try. We could pass lots of references around so the objects can talk directly. We could fire off notifications. We could throw it all in the app delegate and call [UIApplication sharedApplication].delegate.

I’ve started doing something that’s a bit of a combination of these, which offers less housekeeping than passing lots of objects around, less indirection than notifications (which means more help from Xcode) and less ickiness than throwing it all in the app delegate. The solution is two classes: M3AppController and M3AppFactory.

M3AppFactory is rather simple. It creates all the core objects for the app. In a simple app like this I throw all the creation in here, but in more complex apps I’ve created separate factories for the frontend and backend (the former using the latter). You could also split up the factories by function if you wished. The idea though is to have a central location in which all the key long-term objects are created.

M3AppController directs all the flow in the app. At times I’ve felt it risks becoming a god object. It can end up controlling the display of each context, as well as various app level functions such as sending analytics or presenting messages, leading it to grow rather large in line count. However, it still maintains its purpose of managing the flow between the different contexts of the app.

Rather than a god object, it feels a bit more like a CEO object. The CEO is in overall control of the company and oversees various departments (contexts). The CEO needs to orchestrate the interaction between these departments in order to get the best out of them.

Using The App Controller

The key point of the App Controller is that it is passed into all the main objects in the app (this is done in the App Factory, and is a key reason to have that class), so it can be accessed via a property. We could create a +sharedController method, but the property way of working allows for looser coupling and easier testability.

Let's look at how we manage the main contexts in the app. We saw earlier that we set up the main Entries context in -[M3AppController launchInWindow:]. This is in here, rather than in the app delegate, because we want to keep the app delegate lean. It should focus purely on handling the delegate methods of UIApplication.

We treat the Entries context as sort of a master context from which all other contexts appear, but this is purely due to how this app is structured. Because of this, we don’t have any means to show or hide the context beyond launching the app, as all other contexts will appear on top of it.

The other context is the Settings context. Here we have methods to both show and hide the settings. If you look at -[M3AppController showSettings] you’ll see we’re simply presenting the settings controller on the entries controller. The hide method below just dismisses it. These methods are called from inside the contexts. The show method is called from -[M3EntriesViewController showSettings:], where, in response to an action being invoked from the UI, we get our App Controller object and tell it to showSettings. Similarly, in -[M3SettingsViewController done:] we call hideSettings.
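As a sketch (the factory accessor names here are assumptions based on the structure described above, not the sample project's exact API), the pair of methods boils down to:

- (void)showSettings {
	UIViewController *settingsController = [self.appFactory settingsNavigationController];
	[[self.appFactory entriesNavigationController] presentViewController:settingsController animated:YES completion:nil];
}

- (void)hideSettings {
	[[self.appFactory entriesNavigationController] dismissViewControllerAnimated:YES completion:nil];
}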

Communicating Across Contexts

The Settings context isn’t really for dealing with settings. In fact it mostly emulates the Entries context by displaying a list of entries. If you tap on one of these, it will display that entry and then hide the settings screen.

Displaying an entry is an interesting case, as it requires one context to control a transition within another context (rather than just transitioning between contexts). In -[M3SettingsViewController tableView:didSelectRowAtIndexPath:] we find the relevant M3Entry object, then tell the app controller to show details for that entry and hide the settings. It’s very simple to read and use (more so than passing round a navigation controller, creating the relevant view controller and pushing it on, all from the settings view controller).

Looking at -[M3AppController showDetailsForEntry:] we can see there’s nothing complex about this transition. We are asking the factory for a details view controller for this entry (which simply instantiates a details view controller from the Storyboard) and pushing it onto the entries navigation controller.
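In sketch form (the factory method name here is an assumption for illustration):

- (void)showDetailsForEntry:(M3Entry *)entry {
	UIViewController *detailsController = [self.appFactory detailsViewControllerForEntry:entry];
	[[self.appFactory entriesNavigationController] pushViewController:detailsController animated:YES];
}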


If you take only one thing from this post, I hope it’s the knowledge of how to work with Storyboards and NIBs in code. My main aim was to get people breaking up their god Storyboards into smaller, more manageable units.

Everything else is less concrete and certain. They are ideas I’ve been playing with and, while they’re far from perfect, they help make the flow of my app easier to understand and reason about, while also helping to separate each context out. These ideas grew out of my usage of multiple Storyboards, which made me think more about these contexts as distinct units. Maybe it will encourage you to try the same approach, or maybe build upon it to make something even better than I have now.

If you would like to send me a comment on this post, send an email to pilky@mcubedsw.com. Alternatively send a tweet to @pilky.

Optimising Autolayout
Martin Pilkington, 2013-05-28

There was a post by Florian Kugler going round recently about Autolayout Performance on iOS. It looked at how much time it takes Autolayout to add views, and how this increases with the number of views. The post, while providing very useful information, didn't seem to best represent real world performance of Autolayout, instead showing a set of worst-case scenarios.

In this post, I want to look more at why Florian got the results he did. My hope is to highlight some bad practices one can have with Autolayout, and look at what makes both the statements that "Autolayout takes several seconds to layout a few 100 views" and "Autolayout can layout a few 100 views very quickly" true, despite their seemingly contradictory nature.


First, I want to cover exactly how I measured the numbers in question. I believe this is slightly different to how Florian measured them, but it reflects the layout work itself much more closely. I started with the original source project and made modifications to test some variations. You can find my version of the project on GitHub.

To measure timings, I ran the app in Instruments using the Time Profiler template. I did not feel the need to restart the app each time as there is little-to-no caching going on. I ran each test 3 times in succession, clearing the views between each test. Afterwards, in Instruments, I focused on the region of the sample in which each test was run. To get the time a layout took, I used the time Instruments reported for its layout method. I calculated the average of the 3 runs and used that to provide the data for this post.

Recreating The Initial Results

As my methods were slightly different, and I am using a different device to Florian (a 3rd Gen iPad), I first set out to test the same things he did. His project tested 3 ways of laying out:

  • A flat hierarchy of views, absolutely positioned in the root view
  • A flat hierarchy of views, relatively positioned to each other
  • A nested hierarchy of views, relatively positioned to each other

He also did both the flat and nested hierarchies by simply setting the frame. Below is the graph showing what I got for the flat hierarchy.

Flat Layout Graph 1

If you compare to Florian's post, you'll see that this looks rather different. In Florian's graph, the green line is worse than the orange line, but they are both fairly close. In my graph, the orange line is a lot worse (as an example, for 600 views, Florian got 5 seconds, whereas I got closer to 7.5 seconds, despite having a faster iPad), but the green line is a lot better (for 600 views I got around 2.5 seconds vs Florian's 6-7 seconds).

I'm putting this difference down to the difference in measurement. As mentioned earlier, I used the timing of the method creating the constraints. In order to do this, I invoked the -layoutIfNeeded method on the root view at the end of each method. This forces Autolayout to run immediately, rather than deferring until the end of the run loop, meaning that Instruments counts the performance against the method creating the constraints, rather than a system method.
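In code terms the measurement hook is just this (a sketch with an assumed method name, not the test project's exact code):

- (void)addViews {
	//...create the subviews and their constraints here...

	//Force Autolayout to solve immediately, so Instruments attributes the time to this method
	//rather than to the deferred layout pass at the end of the run loop
	[self.view layoutIfNeeded];
}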

I suspect Florian was measuring the overall time the CPU was working, but this isn't necessarily all due to Autolayout. I believe my way is more indicative of exactly what Autolayout is doing, but Florian's is more indicative of how long the app may be unresponsive for. Regardless, the actual values don't matter as much as the curve, and any relative improvements we can find.

Nested Layout Graph 1

The nested layout graph has fewer differences with the original tests. The curve is pretty much identical. The only difference is that my times are slightly faster, which is to be expected when running on a faster device.

The Power Of Locality

One thing I noticed about the original code was that all the constraints were being added on the root view. In some cases this is required, as the constraint references the root view. All the views a constraint references must be in the subtree of the view it is being added to. As such, you could just throw every constraint in the UI into the app's root view.

You don't want to do that though, for several reasons. The most obvious is that it's a lot simpler to understand code when it is adding constraints locally. The other is that it dramatically affects performance.

Let's look at our flat layout. While the position constraints need to be on the root view, the size constraints don't. I changed the code so that the size constraints were being added to the subview instead, and got the following results:

Flat Layout Graph 2

The purple line is the relative layout, with the size constraints being as local as possible, and the red is the equivalent line for the absolute layout. As you can see, we're getting some performance improvements. I'm not 100% sure, but my educated guess is that this is because we are reducing the size of the calculation on the root view. We are letting Autolayout perform part of the layout as a lot of small calculations, rather than calculating the whole thing in one big blob.

These gains are relatively small though. The more complex position calculations are still all clustered together on the root view, as that is already as local as they can go. Let's look at the nested layout then, as all the constraints relating to a view can be put in the immediate superview, dramatically increasing the locality. The graph below shows just how significant an improvement this gives.

Nested Layout Graph 2

To give actual numbers, the 200 view layout took 22.75 seconds when putting all constraints in the root view, but only 2.00 seconds when putting them on the immediate superview. Putting the same constraints on the root view leads to the code running over 11 times slower. The lesson of this should be obvious. When working with Autolayout, put all your constraints as locally as possible.
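To make the difference concrete, here's a sketch of the two placements for a simple width constraint (not the test project's exact code):

NSLayoutConstraint *widthConstraint = [NSLayoutConstraint constraintWithItem:subview attribute:NSLayoutAttributeWidth
                                                                    relatedBy:NSLayoutRelationEqual
                                                                       toItem:nil attribute:NSLayoutAttributeNotAnAttribute
                                                                   multiplier:1.0 constant:50];

//As local as possible: the constraint only references subview, so it can live on subview itself
[subview addConstraint:widthConstraint];

//Also valid, but it joins the root view's much larger constraint problem
//[rootView addConstraint:widthConstraint];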

Modifying Existing View Hierarchies

Florian mentioned that constraint satisfaction problems have a polynomial complexity. We can see this in the curves of the graphs above. However, the tests above are largely unrepresentative of the real-world use of Autolayout. Knowing how fast Autolayout is at throwing 1000 views into a parent view is useful, much as knowing how fast NSArray is at adding millions of objects is useful. However, the majority of NSArrays created rarely hold more than a few 100 items, with many holding fewer than 10. Similarly, it's rare for an individual view to hold more than 40-50 subviews, or to have a view hierarchy more than 20-30 views deep (I suspect those values are wildly overestimated).

The more realistic scenario is having a view hierarchy where we want to move some views around, or to add a few additional views. I conducted some more tests based on those above. Taking both the flat (absolute, not relative) and nested layouts, at the sizes used above, I then calculated how long it took to move all the views and to add 10 additional views.

As we can see from the graph below, even at up to 1000 views, adding an additional 10 views to the flat, absolutely positioned layout is largely linear. This is because we are only referencing the root view, and so all the other views don't need to be recalculated. If we inserted a view into the middle of the chain of relatively positioned views, it would likely not be quite so fast.

Similarly, moving is largely linear, though it does spike at 1000 views. Again, this is because the constraints for one view do not depend on any other sibling view.

Flat Layout Graph Adding & Moving Views

If we look at the nested layout, we find that moving is also seemingly linear. While it looks a lot shallower than the flat hierarchy, that is merely a trick of the graph; they are largely on the same line. When it comes to adding, we do see a curve, but we are adding an additional 10 layers of view hierarchy here, each dependent on the previous.

Nested Layout Graph Adding & Moving Views

The thing to note with all of these, is how fast they are compared to the previous tests. To add 1000 absolutely positioned views took 6.6 seconds, but to add an additional 10 took just 0.055 seconds. This comes down to how clever the Cassowary Constraint Solver is.

Rather than attempting to re-solve the entire problem from scratch each time, it has an incremental system. It can re-use all its previous calculations and merely modify the results when you add, edit or remove constraints. This is why it can take several seconds to add a few 100 views in one go, but you can then resize that window rapidly, and have all the constraints be re-calculated and frames set.

Autolayout is slower than manually setting frames. It is generalising the solving of quite complex layout problems across a whole UI. Having specialised algorithms focused on a single view is always going to be faster to run. Autolayout's advantage isn't in making layout faster at runtime, but in making it faster and easier for us to define layouts when coding.

Like with many of the tools we use, Autolayout takes advantage of the fact that we have an abundance of processing power, in order to make it easier for us to write apps. For the vast majority of use cases, and if used correctly, Autolayout is more than fast enough. It may sound like an excuse, but it's the same excuse we use to justify writing in our higher level programming languages instead of Assembly.

If you're looking for more on Autolayout, check out the Autolayout Guide, coming Summer 2013.

If you would like to send me a comment on this post, send an email to pilky@mcubedsw.com. Alternatively send a tweet to @pilky.

Improving Autolayout
Martin Pilkington, 2013-02-19

Autolayout has had a lot of bad press. A lot of people find it complex, confusing and more hassle than it's worth. They find the APIs a bit awkward to work with and the tools provided seem to work against them and break what they've done. I'm wanting to change that, so I'm working on various projects to help people learn and use Autolayout.

The Autolayout Guide

From my own experience with Autolayout, and from talking to others about their experience, I'm convinced that 90% of the problems with Autolayout are simply due to people's mindset. Autolayout isn't merely a more powerful form of what we had before, it's a complete conceptual shift. It doesn't help that there isn't much in the way of documentation or guides beyond what Apple provides.

I'm wanting to change that, which is why I have started work on a book called The Autolayout Guide. I want to provide a book that will teach people the conceptual side of Autolayout, the API and tooling side of Autolayout and finally give lots of examples of using Autolayout in real situations. There'll be more information on this as the book progresses. It is still in its early stages at the moment.


The reason I feel that 90% of the problems people have with Autolayout are due to their mindset, is because I feel the APIs and tools are pretty robust. However, they aren't perfect, which is what the remaining 10% covers. I've got two things I'm looking at to help improve the areas Autolayout is lacking.

The first of these is building an Autolayout Toolkit app, to help in debugging and constructing constraint-based UIs. This is quite a while off yet. My hope is to open source it in the future, but as I'm working on it in spare time, I can't say when it will be released in a usable state, nor what features it will have.

The second is M3AppKit. Over the course of my time writing apps, I've built up a series of useful methods and classes that I've been putting together in frameworks. I've spent the past few weeks tidying up these frameworks, removing anything that isn't essential, adding tests and writing documentation. The result has been the release of M3Foundation 1.0, M3CoreData 1.0, and today M3AppKit 1.0.

M3AppKit contains many great things, but I want to focus on those that deal with Autolayout. The first is a category on NSLayoutConstraint. Making individual constraints in code can feel awkward, as you have the rather long +constraintWithItem:attribute:relatedBy:toItem:attribute:multiplier:constant: method. This method is great at exposing all the required components of a constraint, but isn't very good at expressing the intent of code.

The most common constraints you will make are size-based constraints or constraints on a view's margin to its superview. NSLayoutConstraint+M3Extensions adds a series of convenience methods to make this simpler and make code easier to read. For example, if you want a constraint to fix a view's width to 100pt, you would previously have had to do this:

[NSLayoutConstraint constraintWithItem:view
                             attribute:NSLayoutAttributeWidth
                             relatedBy:NSLayoutRelationEqual
                                toItem:nil
                             attribute:NSLayoutAttributeNotAnAttribute
                            multiplier:1.0
                              constant:100];

That is quite a long method call. NSLayoutConstraint+M3Extensions lets you simplify it to this:

[NSLayoutConstraint m3_fixedWidthConstraintWithView:view constant:100];

The other major Autolayout related item is the NSView+M3AutolayoutExtensions category. This adds two methods. The first is simple enough, allowing you to add a subview while also setting its margin constraints like so:

[myView m3_addSubview:mySubview marginsToSuperview:NSEdgeInsetsMake(20, 0, 20, 0)];

That adds the subview and adds constraints to position it 20pt from the top and bottom of the superview and 0pt from the left and right (technically the leading and trailing edges).

The other method requires a bit more explanation, as its aim is to replace all the existing methods of creating constraints with something that is both incredibly flexible and very concise.

The Constraint Equation Syntax

At their most basic, constraints are just representations of equations in the form y = mx + c, your basic linear equation. And Autolayout is merely a linear programming solver, aiming to find the smallest solution to all these constraints.
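For example, the constraint "view1's left edge sits 20pt in from its superview's left edge" is just view1.left = 1.0 × superview.left + 20: y is view1.left, x is superview.left, the multiplier m is 1.0 and the constant c is 20.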

The existing methods of creating constraints are somewhat awkward. The +constraintWithItem:… method is very verbose and only allows you to specify a particular constraint at a time. The Visual Syntax is more concise and lets you specify a lot of constraints at once, but limits you to working in one axis at a time, and doesn't even support certain layouts.

The Constraint Equation Syntax is my attempt at providing a way to be more concise and more flexible than either of these existing solutions. The best way to demonstrate this is with an example. Below is a basic layout:

Image of example layout

Now this is actually very simple to create in Xcode, but I want to show how we'd go about it in code. First we'll look at the existing methods. I'll try to use the visual syntax as much as possible to reduce the code required. Below is what we'd need to write to achieve this, assuming we'd already stripped away all the constraints in the NIB.

- (void)setupConstraints {
    NSDictionary *views = @{@"view1": self.view1, @"view2": self.view2, @"view3": self.view3};
    NSView *view = self.window.contentView;
    [view addConstraints:[NSLayoutConstraint constraintsWithVisualFormat:@"|-[view1]-[view2(==view1)]-[view3(==view1)]-|"
                                                                 options:0 metrics:nil views:views]];
    [view addConstraints:[NSLayoutConstraint constraintsWithVisualFormat:@"V:|-(>=20)-[view1(==400)]-(>=20)-|"
                                                                 options:0 metrics:nil views:views]];
    [view addConstraints:[NSLayoutConstraint constraintsWithVisualFormat:@"V:[view2(==400)]"
                                                                 options:0 metrics:nil views:views]];
    [view addConstraints:[NSLayoutConstraint constraintsWithVisualFormat:@"V:[view3(==400)]"
                                                                 options:0 metrics:nil views:views]];
    [view addConstraint:[NSLayoutConstraint constraintWithItem:self.view1 attribute:NSLayoutAttributeCenterY relatedBy:NSLayoutRelationEqual
                                                         toItem:view attribute:NSLayoutAttributeCenterY multiplier:1.0 constant:0]];
    [view addConstraint:[NSLayoutConstraint constraintWithItem:self.view2 attribute:NSLayoutAttributeCenterY relatedBy:NSLayoutRelationEqual
                                                         toItem:view attribute:NSLayoutAttributeCenterY multiplier:1.0 constant:0]];
    [view addConstraint:[NSLayoutConstraint constraintWithItem:self.view3 attribute:NSLayoutAttributeCenterY relatedBy:NSLayoutRelationEqual
                                                         toItem:view attribute:NSLayoutAttributeCenterY multiplier:1.0 constant:0]];
}

With the Equation Syntax we can simplify this to the following:

- (void)setupConstraints {
	NSDictionary *views = @{@"view1": self.view1, @"view2": self.view2, @"view3": self.view3};
	[self.window.contentView m3_addConstraintsFromEquations:@[
	 	//20 points between each view
		@"$view1.left = $self.left + 20",
		@"$view2.left = $view1.right + 20",
		@"$view3.left = $view2.right + 20",
		@"$self.right = $view3.right + 20",
		//Equal widths
	 	@"$view2.width = $view1.width",
	 	@"$view3.width = $view1.width",
		//Top and bottom margins of view1 >= 20 (we imply the superview when no other view is given)
		@"$view1.top >= 20",
		@"$view1.bottom <= -20", //This constraint technically isn't needed, but is added for completeness
		//All views 400pt tall
		@"$all.height = 400",
		//All views vertically centred
		@"$all.centerY = $self.centerY"
	] substitutionViews:views];
}

For starters you'll notice this is a LOT more concise. If we took out all the comments and blank lines then the Equation Syntax is 15 lines compared to 40 lines for the original. Admittedly I have split up method calls over multiple lines, but even if we sacrificed readability and condensed things as much as possible it's still 4 lines vs 11 lines.

You can reason a bit better about some of the constraints in the Visual Syntax: margins are always positive, and you can visually see how views are laid out along an axis. However, the Equation Syntax displays everything in a consistent and simple manner. The value on the left is equal to the result of the expression on the right. You can better reason about what those values could be in your head, and as such better reason about how the constraints you've created will work together.

A great example is the seemingly bizarre $view1.bottom <= -20. This is a result of a convenience shortcut that lets you leave off the other view and attribute when you're constraining a view to its superview. Expanded, this really means $view1.bottom <= $self.bottom - 20, and when you start throwing in values you see that this makes sense. If $self.bottom is 100, then the value of $view1.bottom is 80. If we remember the top left is the origin then we see why the constant is -20. Of course if we prefer positive constants, we could just re-write this as $self.bottom >= $view1.bottom + 20, as we have when defining the horizontal margins.

Any confusion with the equations vanishes when you start thinking in terms of the geometry and start inputting theoretical values. Coincidentally, this also makes Autolayout a lot easier to work with, as you start thinking in its terms of attributes rather than frames.

At the moment all of this stuff is Mac only. I'm hoping to work on iOS versions soon, especially as I want to use this myself on iOS projects. But it's the start of a larger project to try and help others grow to love Autolayout as I do, and be able to explore its full potential.

If you would like to send me a comment on this post, send an email to pilky@mcubedsw.com. Alternatively send a tweet to @pilky.

The PCC Elections
Martin Pilkington, 2012-11-16

Yesterday the first clusterfuck elections were held for Police & Crime Commissioners. These are meant to be elected officials that oversee policing and crime prevention in 41 areas (excluding London). The turnout for these elections has been laughably low, ranging from 10% to just below 19%. These are the lowest peacetime election turnouts in history. Evidently many chose not to vote, me included. But why was that? I can't vouch for everyone but I can at least give my reasons.

One of the primary reasons is that we're in an age of austerity, yet we're somehow finding £100 million to throw away on an election that nobody really knows or cares about. Instead of spending so much money on a new election, why not spend it on… I don't know, more police?

The claim is that the PCCs will have more of a mandate than those unelected officials they replace. That is true, but it's not exactly a big mandate. The Tories like to talk about the legitimacy of trade union elections or the EU, yet those elections ended up having more of a mandate than these elections.

Another reason is lack of information. The only information I've really seen has been from the Electoral Commission. I didn't even know who was standing until a few weeks ago, when I saw it was Labour, the Tories, the Lib Dems… Wasn't the whole point meant to be that political parties wouldn't get involved and it would be largely local independents? Instead it's becoming a place for retired or failed MPs to go and try and get a job somewhere else.

But that's no excuse for not voting? If you're a true citizen you can go and find information rather than expect it to be spoon fed to you. Right? And not voting is purely down to apathy and you lose your right to complain about the person elected. If this was any other election I would somewhat agree with that. But I did go out and find information and what I found is that it wouldn't matter who I voted for. The policies for every candidate in my area were this:

  • Put more police on the streets
  • Reduce crime

Every single candidate. They'd all say they'll do the exact same thing, which coincidentally is the job description of the post of PCC. It genuinely does not matter. And to further compound that you need to look at the areas they could have differentiated themselves. Ending the criminalisation of drugs, protecting civil rights, reforming prison services. These are things they have no power over. All they can really do is redirect a bit of money (which is still effectively centrally controlled so they can't roll back cuts to policing budgets).

Effectively we're spending £100 million to elect 41 people to jobs giving them between £65k and £100k a year to just shift a few bits of money around in an already restricted budget. And this is why I didn't bother voting, and why I suspect many others didn't bother voting either.

If you would like to send me a comment on this post, send an email to pilky@mcubedsw.com. Alternatively send a tweet to @pilky.

EXCLUSIVE: Apple Maps Press Conference Coverage
Martin Pilkington, 2012-09-21

8:30am: Hello and welcome to this coverage of Apple's October 12th Maps press conference. Apple called this conference in response to the uproar over their new Maps applications.

8:40am: We're noticing a lot of people filing in, dressed like tourists. I overheard one of them asking whether this was Buckingham Palace, and pointing to their iPhone claiming Siri sent them here.

9:00am: The music has stopped, Tim Cook is taking to the stage

9:01am: "I'm glad you all found your way here. I want to talk to you today about our fantastic new Maps application."

9:02am: "The response has been fantastic. Here's what one happy customer said"

9:02am: "'Maps is insanely great' - Jonny Appleseed"

9:02am: "Unfortunately some people have been complaining about the quality of our maps, so I'm here to tell you our solution"

9:03am: "Radar or GTFO!"

9:03am: "Thank you for coming today, and here to play us out, U2"

9:05am: U2 have taken to the stage and started playing "Still Haven't Found What I'm Looking For"

9:10am: U2 are finishing off with "Where The Streets Have No Name"

9:16am: And that's the conference over with. Now just to find my way home…

If you would like to send me a comment on this post, send an email to pilky@mcubedsw.com. Alternatively send a tweet to @pilky.

Dot Dot Dot
Martin Pilkington, 2012-03-15

Emacs or Vim? Tabs or Spaces? Mac or PC? There are many arguments between developers. One of the biggest ones in the Objective-C community is over dot-syntax. It's the argument that just keeps going. I'm on the side that doesn't particularly like dot syntax, but people often misunderstand the reasoning behind this position. So I'm going to outline it here.

Overloading an operator

One of the big no-nos in Objective-C is overloading things. We don't overload methods, we use different names. We don't overload operators, we use different operators. Dot syntax is the one exception though. The binary . operator is used both for accessing structs AND accessing properties. For the most part this doesn't cause a problem, but some properties return/take structs. This means you get wonderful things like:

self.view.frame.size.width = 20;

This doesn't set the frame's width to 20 as you would expect from the code. So we have the same operator used in one line of code meaning (and allowing) for different things. This is bad.
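The usual workaround is to copy the struct, modify it, then assign it back:

CGRect frame = self.view.frame;
frame.size.width = 20;
self.view.frame = frame;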

This could easily have been solved by using a different operator. Maybe object•property or object~>property or object@property or one of many other possible combinations. It removes all ambiguity (and ambiguity is bad in syntax) and preserves the concept that overloading is to be avoided in Obj-C.

Strong Typing

Objective-C is weakly typed. This is because C is weakly typed. This means I can do things such as "if (object)" without having to convert it to a proper boolean expression. Objective-C also has a mix of static and dynamic typing. This allows for the compiler to catch most problems while providing the flexibility that dynamic typing allows when you need it.

Unfortunately dot syntax is strongly typed. It checks at compile time that an object's header says it implements a method. If you are using static typing then this isn't an issue. Unfortunately many classes use dynamic typing. Here is a common example:

NSArray *myArray = @[@"foo", @"bar", @"baz"];
NSLog(@"%lu", myArray[1].length);

That is a compiler error. Why? Because NSArray returns objects of type id. However, if we switch back to bracketed syntax:

NSArray *myArray = @[ @"foo", @"bar", @"baz" ];
NSLog(@"%lu", [myArray[1] length]);

That isn't an error because it is weakly typed. So it has a different behaviour.
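You can keep dot syntax here, but only by adding an otherwise unnecessary cast to tell the compiler the type:

NSLog(@"%lu", ((NSString *)myArray[1]).length);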

object.foo != [object foo]

In most cases the above title is false, but it isn't always the case. Take the following code:

@property (readonly, getter=isComplete) BOOL complete;
- (void)complete;

- (BOOL)isComplete {
	return YES;
}

- (void)complete {
	//do the completion work
}

That looks like pretty innocent code. But what if we try to access it, what will happen? Well lets use bracket syntax first:

[self complete];
[self isComplete];

Makes sense. As dot syntax should map to methods this should work the same:

self.complete;
self.isComplete;

Oh dear. It seems that dot syntax doesn't follow the same rules as bracket syntax. Here self.complete calls -isComplete (the declared getter) rather than -complete. If we removed the -complete method then the dot syntax would still be valid, but the bracket syntax would throw a warning. Why? Because -complete doesn't exist.

Why Some Devs Dislike Dot Syntax

The problem isn't the idea of a simpler syntax for accessing properties. Every developer I know is in favour of that. The problem some developers have is with the implementation. It's painted by its proponents as a simpler syntax for accessing properties, but due to its implementation it should really be seen as a completely separate feature.

If they had used a non-overloaded operator and made it map exactly to how the bracket syntax works then there would be no complaints. Look at other features Apple has added to make life simpler. @property directly replaces code you would have written before. The new literals directly replace code you would have written before. Dot syntax doesn't. Dot syntax is like ARC. It works mostly the same as the old way, but with less typing. However it's not a direct replacement as it brings its own set of problems to deal with.

Depending on your viewpoint these may all be non-issues. But those of us who find these to be genuine issues are not some "old fashioned" programmers who are stuck in the past. Quite the opposite, we are often amongst the first to use any new API, developer tool or language feature. I love everything else about Objective-C, but this is the one thing I think is poorly designed. I have softened my stance a bit and found that I can cope with having to put in an otherwise unnecessary cast, so I now use dot syntax for getting values in my own code. I doubt I'll ever use it for setting them though, because I genuinely believe it leads to worse code.

You may disagree, but that's your choice that you've probably arrived at through reasoning over the past 5 years, the same way I (and many others who share the same view) arrived at mine.

If you would like to send me a comment on this post, send an email to pilky@mcubedsw.com. Alternatively send a tweet to @pilky.

The Xcode 4.3 Review [Updated x2]
Martin Pilkington, 2012-03-13

A new year and a new version of Xcode 4. This of course means that it's time for me to drop everything and try to find out everything that's changed. Thankfully (for me at least), this is a relatively small update feature wise. I've been seeing two different responses to 4.3. Half of people seem to be experiencing a lot of crashes, but the other half seem to be seeing major performance improvements. I will be honest in admitting that I haven't really noticed either, but then again I'm still sure I have a special "more stable" build of Xcode that few others have. But besides that, what else has changed?

/Developer is no more

Since Xcode appeared on the Mac App Store, it has been a pain to update. The reason being that the App Store downloaded an installer, and you had to remember to run that installer afterwards. To solve this problem Apple have turned Xcode into a self contained application. Rather than Xcode residing in /Developer, what was /Developer now resides within Xcode.

This does raise an interesting question: where are all the other developer tools? The most commonly used are included in Xcode. Instruments, FileMerge, Application Loader, Icon Composer and OpenGL ES Performance Detective can all be accessed from Xcode's application menu. The rest of the developer tools are available via connect.apple.com in various packages. This should mean that the download for Xcode is lighter, though you won't be guaranteed to get the most up-to-date versions of all the tools in one download.

Command Line Tools

As there is now no installer for Xcode, various command line tools won't be installed by default. You can still get them, though, through the Command Line Tools component, accessible via the Downloads preferences pane. They can also be downloaded independently if you want to set up a basic development environment without the need for Xcode. To find out more, check out Kenneth Reitz's blog post.

Update: User Defined Runtime Attributes

Mac OS X 10.6 introduced the concept of User Defined Runtime Attributes in a NIB. In a nutshell, these allowed you to specify some key path/value pairs that would be set on an object in a NIB when it is loaded. This allowed you to customise properties of views that weren't available in their inspector without having to resort to code, which is very useful for custom views. Unfortunately this was only limited to boolean, string, number and nil values.

I missed this when I first wrote my Xcode 4.3 review, but Apple has made some very significant additions. You can now create attributes for points, sizes, rects, ranges and colours. This gives a much wider variety of options for customising objects. I'd still love to see support for fonts, arrays and dictionaries but it is a major improvement, especially the addition of support for colours. Unfortunately your application must run on at least Mac OS X 10.7 or iOS 5.

Miscellaneous improvements

There isn't really much else that's major, but there are quite a few smaller changes:

  • The "Convert to ARC" refactoring tool now supports conversion of garbage collected code.
  • A "Jump to Instruction Pointer" menu item that moves the text cursor to the line the instruction pointer is at.
  • The entitlements UI has been re-arranged a bit to make the file system access options clearer.
  • Mac NIBs now have controls on the canvas to create new autolayout constraints.
  • Autolayout constraints can now have negative constant values.
  • You can now modify the default class prefix setting in the file inspector.
  • Archiving lets you export an app using a Developer ID, for use with the new Gatekeeper in Mountain Lion.

Update: It seems I forgot a few improvements in my initial review. I've added these below.

  • Code completion now works within macros (OCUnit macros now work in Xcode 4.3.1).
  • Creating new groups starts immediately editing the name.
  • LLDB is now the default debugger.

Xcode 4.3 is more like a maintenance release than anything, tidying things up without the need to add new OS X or iOS specific features. There's not really much more to say than that, so I shall bid you farewell until the next release of Xcode.

If you would like to send me a comment on this post, send an email to pilky@mcubedsw.com. Alternatively send a tweet to @pilky.

The Case Against Radar
Martin Pilkington, 2012-03-01

I'll start off by saying that I'm not suggesting you should not file bugs with Apple, you should. I'm also not going to suggest that Radar isn't a valuable tool to Apple, it is. But it's time to get something off my chest, something I've been wanting to rant about for ages.

I fucking hate Radar and everything related to filing bugs with Apple.

I cannot begin to explain how awful it is. First of all the radar software itself is a really dated web app. It's hard to find radars you've already filed and takes a lot of clicks to do anything useful at all. This alone really puts you off wanting to bother as it is so much effort to file even the most basic of radars.

Then you get the attitude of people at Apple. Most of the time they are absolutely fantastic and asking to file a bug is something I'm ok with. But the passive aggressive "Radar or GTFO" or "well have you filed a radar?" attitude gets incredibly annoying and off-putting.

And what about when you finally do file a radar? Well it's a black hole. Occasionally you will get a reply asking for more information. More often than not I've found it is for information you've already given in the original report. For about half of your radars they'll be marked as a duplicate. You don't know what the original radar contained, nor its status at any point. Your involvement is essentially over. And of course for the rest of your radars they'll just be left untouched.

So basically the majority of radars you file will end up being of no use to you solving any issues. You just file them and forget about them and maybe they'll be fixed (of course it's hard to know given Apple's release notes are often so piss poor in listing radars fixed). And it's a colossal pain to do it. And then people wonder why many don't file radars.

Fixing Radar

So what can Apple do to fix it? Well first off they could make the web app not suck. That would be a big improvement. The ideal thing though would be to let us file radars from within Xcode. Let us bring up a "new radar" panel with a keyboard shortcut and start typing. Let us save reports as drafts so we can start them when we think of them but finish them when we have time, which will stop the all too frequent occurrence of thinking you filed a radar that you didn't. Add a checkbox for including the current Xcode project and automatically attach a system profile for bug types that require it. Also let us dual post to Open Radar (though this isn't necessarily needed if other things are changed). All of this is in rdar://8749276

Next, Apple needs to open up Radar. They always use the excuse of "radars contain private information". Let me tell you, the majority of them don't. Almost every radar I've filed could easily be posted publicly without any problems for me, Apple or anyone. Privacy is an issue in some cases, so Apple can just provide a checkbox to let us mark a radar as open. They could then let us search these radars ourselves, which could save a lot of time if we could just file a "me too" rather than a lengthy report (rdar://10965656)

Actually reading radars would be useful as well. They do read most of them but sometimes they come back with incredibly stupid requests or dismiss any issue you have.

Finally, let us up vote radars. That way we don't have to file a "me too". If we can see a radar already filed that we have a problem with, we could simply up vote it as something important to us. And we could add our own comments to it and make it more a collaboration than lots of islands of secrecy. It would also reduce Apple's workload as they don't have to read and mark radars as duplicates. (rdar://10965728)

Most developers want to file bugs and get things fixed, but as it stands it is a colossal pain to do so, with little visible benefit, especially if the work to write a radar is large, and sometimes the attitude of Apple can completely dissuade you from wanting to bother. If I could change one thing about Apple's developer story it would be radar. Simplest thing I would do? Replace Radar with something like Stack Overflow.

If you would like to send me a comment on this post, send an email to pilky@mcubedsw.com. Alternatively send a tweet to @pilky.

All I Want For X(code)mas [UPDATED]
Martin Pilkington, 2011-12-23

I've spent a lot of time and words this year talking about what Xcode does have. At the same time I've spent quite a few tweets pining for certain features or wishing pain on those who are responsible for annoying bugs. I thought it would be an interesting idea to put together my personal wish list for the future of Xcode. These aren't in any particular order, though I have marked which are bugs I'd love to see fixed and which are features I'd like to see added. And finally I've included radar numbers where appropriate to allow you to file duplicates if you so desire.

[UPDATE 8/2/13]: I've struck through the titles of those items that have since been fixed


Bug: Fix performance

This is one of the biggest complaints people have about Xcode 4: it is so friggin' slow! I'm on a 2.93GHz Quad Core i7 iMac, which by all accounts is an incredibly fast machine, and it can feel really slow at times. Slow opening files, slow showing and hiding the UI, slow searching for documentation. It also goes through memory like it's going out of fashion. It's not unusual for me to quit Xcode and gain up to 8 GB of RAM back on my system.

Bug: Fix the indexer so it doesn't periodically break

The other big bug is that indexing periodically stops working. Either the index gets corrupted somehow or the indexer seems to hang. Either way, it happens often enough to affect almost everyone. The problem is that the indexer powers pretty much everything in Xcode 4, so when it breaks you lose code completion, syntax highlighting, fixits, refactoring and more. It has improved with each release but it still happens too frequently to ignore.

Feature: Let me drag a tab to a tabless window

So I have a window (A) with a tab. I decide "this tab would be much better on this other window (B)". So I drag the tab from A to B. Unfortunately I'm unable to drop the tab on B because it doesn't have any tabs yet, and so doesn't have a tab bar. Unlike Safari, Xcode won't let me drop the tab anywhere but a tab bar. This means that the only way to perform this rather trivial operation is to create a temporary tab to show the tab bar.

Feature: Let me dock a window as a tab

This is sort of the inverse of the above request. I would love to be able to take a tabless window and dock it as a tab on another window, without having to create a temporary tab.

Bug: When opening a tab with a behaviour, open it in the frontmost window

And now for one of the most annoying bugs. I have a bunch of behaviours set up to show various tabs. The problem is, when these tabs aren't visible, it isn't guaranteed where they will show. Sometimes they show in the frontmost window, but sometimes they open as separate windows. Ideally it would always be the former, but the latter does happen and ends up leading me to curse the tabs system in Xcode.

Feature: Let me assign a main window for a project/workspace

Most users of Xcode 4 have hit this one. You spend ages working on something, it gets to the end of the day and you close your workspace by clicking the red button at the top. Then *it* appears. The small, insignificant but oh so annoying extra window. You opened it hours ago and forgot about it, it was hidden. But now it just sits there laughing at you, because it knows that next time you open this workspace you're going to have to set EVERYTHING up again.

So please Apple, let me assign a certain window as the "main" window. This window should always be opened when the workspace is, and ideally closing it should also close the workspace.

Bug: Stop forgetting my key bindings

Do exactly what it says on the tin.

Feature: Visually separate build from other scheme actions

I love schemes, they are fantastic. However, even I admit they can be a bit hard to grasp at first. A big part of the problem is the edit schemes window. It somewhat betrays how they actually work. When I first saw schemes I thought from the icons that it ran through them top to bottom, doing a Build, then Run, then Test, then Profile etc. And visually they all appear as equal operations.

In reality though you have 5 operations: Run, Test, Profile, Analyse and Archive. Before all of these can happen though, something has to be built. What would make more sense to me, and hopefully make schemes easier for others, would be to separate build from the other rows, to show that it is a distinct type of step.

Bug: Escape doesn't dismiss preferences window or edit scheme sheet

In almost every application on OS X, escape will dismiss a sheet or a preferences window. Unfortunately Xcode 4 thinks that's too mainstream.

Feature: Build Radar into Xcode

Filing radars is a pain. First off you need to open a browser window. Then you need to log in. Then you need to click New Problem, and then you get a lovely web form to fill out. And it gets even better when you click submit and find Radar has logged you out and lost your bug. It takes a lot of effort to write up a bug in the first place, without all this extra cruft.

Now imagine that Radar is built into the organiser. You can see all your radars and browse them quickly. You can also write new radars. Even better than this, if you choose a radar type that requires a system profiler, it will automatically generate and attach one for you. It also allows you to attach screenshots and even a project that Xcode knows about. It removes all the friction and encourages people to file more bugs.

Feature: Plugin support

I honestly was shocked I hadn't filed this radar until writing this post. I assume almost everyone who's used Xcode for as long as I have has filed a similar one. Basically we want extensibility. We want to be able to build upon the foundations of Xcode and add new features that Apple may not have time for, or may not even have thought of. Almost every other IDE supports this, and internally Xcode seems to have several plugin systems. It's a shame that it doesn't have one other developers can use. Especially as several of the feature suggestions I give could potentially be implemented by 3rd parties.

Code Editing

Bug: Fix code completion in macros


Feature: Add replace all in selection

Nearly 2012 and Xcode 4 doesn't have a replace all in selection option. I am genuinely surprised that this hasn't been fixed yet.

Feature: Add support for C#-like regions

C# and Visual Studio have a really cool feature called regions. Regions are effectively just preprocessor blocks that can be collapsed by the editor. Think of a #pragma mark that also has an end marker and can be collapsed. I long for this in Xcode as it would let me hide huge chunks of classes that I don't need to see right now.

Feature: Add the "search for symbol in documentation" feature back in

The biggest uproar I can remember when Xcode 4 first came out was the loss of the command-option-click shortcut. Performing that on a symbol would use it as the search term for a documentation search, which allowed for you to get to the docs for a method really quickly. I'd love to see this back in, as quick help is largely useless in those areas and it's quite a bit slower to perform a search manually.

Feature: Create a fixit to add methods to the header or stub them in the implementation

Some people hate headers. Personally I think they're a fantastic tool. What we can all agree on though, is that Xcode could do a much better job helping us work with them. There are two common scenarios: you have something defined in your header that you want to implement, or you have something defined in your implementation you want to declare. In the former case Xcode could offer to create stub methods in your implementation for any unimplemented methods defined in that header. In the latter case it could offer to either add the method to the header, or to a class extension in your .m.
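
As a rough sketch (the method name here is made up), given a declaration like this in the header:

- (void)refreshEntriesAnimated:(BOOL)animated;

the fixit could offer to drop an empty stub into the implementation for you:

- (void)refreshEntriesAnimated:(BOOL)animated
{
    // TODO: implement
}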

Feature: Create a fixit for adding missing #imports

Xcode can already warn you if you're missing an import for a class or a framework. It should also know about the location of many of these missing classes or frameworks and whether they are linked into the project. It would be useful if it could put 2 and 2 together and offer a fixit for importing missing headers.

Feature: Colour header defined methods differently to non-header defined methods

I always find it useful to know if a method is public or private when calling it. Unfortunately, there is no ideal way to distinguish between them in code. The best looking way is to prefix with an underscore, but Apple doesn't recommend that. You could prefix with other characters, but that's kind of ugly. Xcode already lets you colour instance variables differently to local ones, so why can't it colour calls to header defined methods (i.e. public methods) differently to those defined only in the implementation (i.e. private methods)?

Bug: Remember what was collapsed when switching files

I don't use code folding in Xcode 4. A large part of that is because code only stays folded as long as you have that file up. As soon as you switch away it unfolds everything. It would be great if this at least persisted while Xcode is open, but even better if it remembered the state between launches.

Feature: Dragging media from library to source should generate code

I'm not 100% sure how many people would use this, but it's an interesting idea nonetheless. The media library is filled with video, audio and images, which can all be dragged out. At the moment they simply insert the name of the resource if they are dragged to code. It would be useful if it instead generated the appropriate source code to use that resource. Say I drag the NSUser image to source code, it would add [NSImage imageNamed:@"NSUser"].

Feature: Extract string as constant refactoring option

All good programmers know to use constants for common string values, especially keys and notifications. This allows the compiler to check for misspellings and for Xcode to offer code completion. But often the first time through you write a plain string as it's quicker when you're prototyping. Or maybe you're getting some code from a bad programmer (no cookie) and it is littered with these strings. Xcode could offer an "extract string as constant" refactoring option that would do most of the hard work for you. It couldn't be perfect (two strings could have the same value, but have different constant names based on their semantic use) but it could do a lot of the groundwork.
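
As a hypothetical before and after (the notification name and constant are invented for illustration):

// Before: a bare string the compiler can't check
[[NSNotificationCenter defaultCenter] postNotificationName:@"PilkyEntryDidChange" object:self];

// After: the refactoring pulls the value out into a constant that can be checked and completed
NSString * const PilkyEntryDidChangeNotification = @"PilkyEntryDidChange";
[[NSNotificationCenter defaultCenter] postNotificationName:PilkyEntryDidChangeNotification object:self];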

Feature: Allow for refactoring protocol method names

A pretty simple one. Xcode doesn't support it. It should.

Feature: Automatically show quick help docs in code completion list

If you've ever used Visual Studio, you'll know it has damn good code completion. One of the things it does really well is show a summary of the item selected in the completion list. This helps when you're not 100% sure of which item to choose. Now I'm constantly going on about how useless quick help is, but this is the one area I think it would be really useful. The Xcode team already realises this, which is why you can manually bring up quick help in the code completion list. I want to see an option to have this automatically appear after a short delay though.

Bug: Take NSStream out back and put it down

Quoth the radar:

Number of times I've used NSStream in 7 years of Cocoa development: 0
Number of times I've used NSString in the past week: >100

Percentage of time Xcode suggests NSStream instead of NSString as the best completion: >50%


Debugging & Testing

Feature: Exclude certain exceptions from exception breakpoints

This is one I came across the need for a few days ago. I sometimes want finer grained control over which exceptions I break on. In some situations I don't want certain exceptions to trigger a break, such as when I'm running tests that check that an exception is thrown.

Feature: Breakpoint groups

A common situation when debugging a feature is to have several breakpoints scattered throughout several files as you follow the code path. Unfortunately the breakpoint navigator groups breakpoints by file, not by the code path you're following. On top of this you often want to enable and disable several breakpoints at a time. Breakpoint groups would solve that by letting you collect breakpoints together into groups that make logical sense to your app.

Feature: Add "log selection in breakpoint" command

NSLog. Every Cocoa programmer knows it well. But in order to add a log statement you need to recompile your app, and then you have to remember to take it out again. There is another way though, and that is breakpoints. You can set a breakpoint to log a variable and continue. This doesn't require a recompile, can't be accidentally shipped and can easily be turned on and off. Unfortunately such breakpoints are a pain to set up, so Xcode could help out by letting you select a variable or a bit of code, right click it and choose to create a "log breakpoint" from that code.

Feature: Add a test navigator

Unit testing has seen rather large improvements in Xcode recently, granted it's coming from a rather lacklustre start. But viewing results is still a pain to deal with as Xcode treats unit test failures as identical to build errors. Odd as it may sound for something written in Java, the JUnit test runner UI does a really good job of this. It has the oft cited green/red bar to denote success or failure, and lists the tests that passed and failed. It also gives information such as the number of passes, tests run, failures and errors, as well as the time taken to run, all in the UI; all of which you need to go look at the log to find in Xcode.

Having a new navigator in the sidebar dedicated to running tests, based on the example set by JUnit, would truly make testing a first class citizen in Xcode.

Bug: Actually use the full value of the test host setting

This is a bug that hit me recently and caused me to wish pain upon all those who worked on Xcode. When unit testing you usually define a test host setting, which is the path to the binary the tests will be injected into. It seems Xcode ignores the "path" part and just takes the binary name and searches for that in your build products directory. That isn't a big deal until you have multiple targets for the same app, with the same binary name. This means Xcode will pick one of them, and it may not be the one you want.
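
For reference, the setting normally looks something like this (the app name is illustrative); it's the leading path portion that Xcode appears to throw away:

TEST_HOST = $(BUILT_PRODUCTS_DIR)/MyApp.app/Contents/MacOS/MyApp
BUNDLE_LOADER = $(TEST_HOST)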

Feature: Bring UI testing to the Mac

Simple one. Instruments has support for UI testing iOS apps, I'd like to see that on the Mac.

Bug: Make LLDB work without crashing

LLDB is still too crashy to use. There are many crashes that happen, but the only one I've been able to semi regularly reproduce is using the 'po' command, which will at some point cause Xcode to hard crash. By hard crash I mean it quits faster than I've ever seen it quit, providing no crash log or any indication of what went wrong. I long for the day LLDB is stable enough to use regularly, because it is so much faster than GDB.

Interface Builder

Bug: The media library is stuck in 2007

Looking at the media library you realise there aren't all that many images there. In reality though Cocoa provides many more standard images, including a standard folder, status indicators and trash icons, but they aren't listed. In fact the media library doesn't show anything added since Leopard.

Feature: Support 3rd party code better in IB

IBPlugins are dead. I was hoping they'd return but they haven't. This means that we've actually taken a step back in the quality of tools for building UIs for Cocoa apps. Apple really needs to add better support for 3rd party code. There are two possible ways to do this.

The first is simply to improve the User Defined Runtime Attributes sections. It already supports strings, integers and booleans, but it also needs to support images, colours, fonts and probably a few other types. Add in key path completion like with bindings and it would allow for the removal of a lot of code.

The other way is to allow headers to be annotated. We already have IBOutlet and IBAction. Why not also have an IBAttribute, that we can put next to a property and maybe define some options on. Xcode could then dynamically generate an inspector for the class from this information.
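
As a sketch of the idea (IBAttribute doesn't exist today, and the class and properties are invented):

@interface PilkyBadgeView : NSView
@property (retain) IBAttribute NSColor *badgeColour;   // IB could show a colour well for this
@property (assign) IBAttribute NSInteger badgeCount;   // and a numeric field for this
@end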

Feature: Add NIB comments as objects on the canvas

NIBs can be rather complicated. Thankfully we now have a nice big canvas, so why not use it to have comments associated with views, rather than cramped in an often hidden inspector. Below is a mock up I supplied with the radar I filed, that uses a NIB and stickies to show what it would look like. We've got this extra virtual space, why not use it?

Feature: Show comment with selection not working

Related to the above, in IB3 you could have comments appear next to a view when it was selected. The checkbox for this has been in Xcode 4 since the beginning, but doesn't actually do anything.

Feature: Add tooltips to IB inspectors again

The thing I miss most from IB3 isn't actually IBPlugins, but the tooltips on inspectors. Hover over a control in an inspector and it would tell you which API it actually represented, which was invaluable when you wanted to know how to do something programmatically.

Feature: Re-add groups to object library

In IB3, the object library used to collect objects into groups. Unfortunately the option to view object groups in the library has gone, making it a bit harder to find what you want.

Feature: Convert constants to values in inspectors

Quite often you know a constant name, but don't know its value. Unfortunately the many inspectors in IB require you to enter in values directly. It would be really cool if you could type in a constant name, hit enter and IB would substitute it with the appropriate value. Even more so if it kept the constant there to aid refactoring.

Project Navigation/Management

Feature: Code bubbles: GIMMIE

I'll just link to two videos. The first on the general code bubble concept. The second on Microsoft's Debugger Canvas which implements similar functionality for Visual Studio debugging. If you don't see the benefit of such a navigation system then something is very, very wrong with you.

Feature: "Playlists" for lightweight file groups

Navigation in IDEs sucks. Stepping through code is pretty well supported, as is jumping to a certain point in code. But the higher level navigation sucks really bad. We logically group our files in layers, such as models, views and controllers. The problem is we work in features, which span all these layers. People often comment on Xcode 4 looking like iTunes as though it is a disparaging remark, but I think it could take something from iTunes: playlists.

Imagine lightweight groups, which you could drag references to code into. You could throw one together for each feature you work on, meaning you don't have to search all over your source tree to find the files and methods you need for the feature you're working on.

Feature: Allow project references in a workspace to be specified relative to a source tree

Source trees were one of those weird features I didn't quite get in Xcode. Then it actually clicked how useful they are. For those who don't know, they allow you to specify custom points from which to base a file reference. By default Xcode gives you options such as an absolute path, relative to the group, relative to the project, etc.

So one day I had to move my workspace file, and it broke all my references to my framework projects. I realised that it would be useful to set up a source tree to my framework source folder and link the framework projects relative to that. It also gives the added benefit of being able to have my framework source folder be independent from my workspace location. Problem is, Xcode doesn't allow this for projects. Cue sad Pilky.

Feature: Jump to callers command

We have a jump to definition feature, but no inverse. It is hard to be completely accurate with jump to callers in a dynamic language, as you can't really know every caller until runtime. However, you can find a lot of the places a method will be called from, and indeed you already need to in order to perform many refactorings. Being able to click on a method declaration and see a list of all the places it is called from would solve a lot of problems that at the moment require a less intelligent project find.

Feature: Moving a file to a group in Xcode should move it to that group's folder on disk

Xcode lets you group files in the file navigator. You can link those groups to folders on disk. If you create a new file in that group it will default to putting it in the linked folder on disk. All really good so far. Unfortunately, if you then try to move a file into or out of that group, it doesn't move on disk. It's a pretty basic operation, yet it requires so much effort: moving the file around in the Finder and then fixing the broken references in Xcode. It's something Xcode should handle for you.

Feature: Bring back bookmarks

I never found a use for bookmarks in Xcode 3. Now they're gone I can see how useful they could be. Being able to mark arbitrary points in your code and give them names can provide an easier way to navigate your code. Combined with the 'playlist' idea above it could give so much more flexibility in how you organise and navigate your code.

Feature: Allow two workspaces to reference same project

If you deal with frameworks, you may end up with the same project being added to multiple workspaces. This is useful for editing the framework inline with the app, but also for things such as debugging. The problem is, a project can only be open in one workspace at a time. This causes all sorts of issues, especially if you open the workspaces in the wrong order and the one you want to reference is opened first, meaning the workspace you want to work on won't build.

I'm sure other people have their own requests. Maybe it'd be useful for you to post your own along with radar numbers. For the most part Xcode 4 is a solid IDE, with very well thought out core principles. The only thing missing from giving it a solid foundation is stability and performance. Once it has that, then the Xcode team can really start powering ahead with new features to make our development lives easier.

If you would like to send me a comment on this post, send an email to pilky@mcubedsw.com. Alternatively send a tweet to @pilky.

The Xcode 4.2 Review tag:pilky.me,2011:view/28 2011-11-07T00:00:00Z 2011-11-07T00:00:00Z Martin Pilkington pilky@mcubedsw.com The great Apple software release of 2011 happened a few weeks ago, bringing the likes of iOS 5 and iCloud. But we don't really care about those in this post, what we care about is Xcode 4.2. If Xcode 4.1 was the Lion release, Xcode 4.2 is the iOS 5 release (although many of the improvements apply to the Mac as well). So let's get cracking and see what's new and improved:


Storyboards

The big new feature for iOS developers is Storyboards. Apple obviously noticed that the majority of applications can be defined as screens and the transitions between them. Unfortunately it was hard to design and structure these, requiring the UI to be split across multiple NIBs and transitions to be handled in code. This ended up requiring a lot of what was effectively boilerplate code to control the flow of your app. Storyboards aims to simplify that.

The easiest way to sum up Storyboards is "all your NIBs on one canvas". It is a higher level view that essentially lets you design and view the UI and flow of your entire app in one screen (and indeed you can zoom out on a Storyboard when you want to see your entire app on one screen). I have talked a lot about the potential features that could be enabled by the improvements introduced in Xcode 4 and Storyboards is a shining example of this. There are 3 key concepts with Storyboards: Scenes, Segues and Relationships.

Scenes are effectively your NIBs. You create them by dragging view controllers to your Storyboard. You can design the UI in their view just like you would any NIB, and below the view you get a dock just like your NIB to hook up connections. Segues are the transitions between scenes. You just drag from a UI element in one scene to another scene and choose the type of segue to use. Built in segues include pushing onto a navigation view, showing a view modally, showing a view in a popover or just replacing the view. You can also create your own custom segues if you have your own transitions you'd like to do. Finally there are relationships. A view controller will have a relationship to content views, for example a split view has master and detail sections. Relationships simply say which view controller relates to which section.
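
Segues can also be driven and customised from code; a minimal sketch (the identifier, PilkyEntryDetailsViewController, entry and selectedEntry are all invented for illustration):

// Trigger a segue defined in the Storyboard
[self performSegueWithIdentifier:@"ShowEntryDetails" sender:self];

// Hand data to the destination view controller just before the transition happens
- (void)prepareForSegue:(UIStoryboardSegue *)segue sender:(id)sender
{
    if ([[segue identifier] isEqualToString:@"ShowEntryDetails"]) {
        PilkyEntryDetailsViewController *details = [segue destinationViewController];
        details.entry = self.selectedEntry;
    }
}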


Storyboards also simplify table views. There are two types of tables you can build. Dynamic prototype tables allow you to design your different cells within the actual table, much like view based NSTableViews on the Mac. You drag your cells in, lay them out in the NIB and give them a reuse identifier. The data source code is pretty much the same, except you no longer have to manually create cells in code, you always get one back from the table.
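
A minimal data source sketch, assuming a prototype cell given the reuse identifier "EntryCell" in the Storyboard (the entries array and title method are invented):

- (UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath
{
    // With a Storyboard prototype this never returns nil; the table instantiates the cell from the prototype for you
    UITableViewCell *cell = [tableView dequeueReusableCellWithIdentifier:@"EntryCell"];
    cell.textLabel.text = [[self.entries objectAtIndex:indexPath.row] title];
    return cell;
}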

The second type of table is a static table. Many applications use tables for static lists, but even though they never change you have to write all the code for the data source. Static tables let you create these lists entirely in the NIB. The data source is handled for you, so you can focus on designing and laying out the table rather than writing code to feed it basic content.

All in all, Storyboards is probably the biggest new feature in terms of code you no longer have to write since iOS was released. I suspect it will quickly become the default way to build iOS apps, though how well it handles more complex apps remains to be seen.

NIB Editing

Storyboards aren't the only improvement for UI editing, the general NIB editing features have seen a lot of nice improvements. There is support for the new appearance APIs in iOS 5, letting you set tint colours and images for most controls. You can now also configure gesture recognisers on views from within the NIB, again saving you lots of code over previous versions.

Autolayout sees some improvements. The constraints section no longer closes every time you edit a constraint and there is now a visual distinction in the list view between the constraints you manually set and those generated by IB. Unfortunately you still cannot set negative constraint constants, despite that being perfectly valid in the API.

Finally the assistant editor sees a nice improvement with connection points appearing in the gutter next to code you can connect to. This means you can finally drag from code to objects.

LLVM 3.0

One of the most significant changes in Xcode 4.2 is in the compiler department. It seems a long time ago that we first saw the Apple LLVM Compiler 1.0 in Xcode 3.2. It promised faster compile times, faster binaries and better error messages. This compiler has moved on to become the cornerstone of many of the technologies Apple uses and builds, such as OpenGL, OpenCL and Xcode 4. Many of the changes seen in Xcode over the past 12-18 months have been made possible by adopting Clang and LLVM.

Xcode 4.2 brings us version 3 of the LLVM compiler (it's worth noting that Apple's version numbers are more for Clang than for the LLVM backend, which is currently at version 2.9). For Cocoa developers the biggest change here is the introduction of Automatic Reference Counting. I won't go into too much detail on ARC here, but essentially it gives you the benefits of a garbage collector without all of the downsides. I will say that from my experience it requires more work to use than the Objective-C GC, but it's a nice improvement to have. ARC is enabled by default in all new Xcode projects, and is easy enough to turn on for existing projects via a compiler flag. There is also a refactoring option to upgrade your existing code as you desire.
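
As a quick illustration of what goes away (the variable and property names are invented), a manually managed snippet like this:

// Manual reference counting: the release is on you
NSString *title = [[NSString alloc] initWithFormat:@"Entry %ld", (long)index];
self.title = title;
[title release];

simply becomes the following under ARC, with the compiler inserting the retains and releases for you:

NSString *title = [[NSString alloc] initWithFormat:@"Entry %ld", (long)index];
self.title = title;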

Other improvements include support for the C++11 standard. It isn't a full implementation yet, but it is a start. And for those who are interested in code coverage, LLVM 3 also adds support for generating gcov data files. The static analyser has seen improvements as well, with support for C++ and Obj-C++ files, as well as the ability to customise which checks are performed in the build settings of a project.

Finally, we say goodbye to a compiler. Historically GCC has been the backbone of Apple's developer tools, but now the transition away is complete. Xcode 4.2 ships with just two compilers: LLVM-Clang and LLVM-GCC. If you want GCC itself you'll have to build it from source or download it from elsewhere (which is probably a better option anyway as Apple's GCC fork is rather behind). For Cocoa development you'll have to at least adopt LLVM-GCC, though preferably LLVM-Clang, as I suspect LLVM-GCC might only be an option for another few years, much like Classic or Rosetta, in order to aid the transition.


Debugging

The new hotness in debugging is aimed at those working with OpenGL ES. I won't pretend to know what I'm talking about as I have only used OpenGL at university (something I keep meaning to correct when I get the time), so most of this will be what I have understood from Apple's presentations.

The OpenGL ES Debugger allows you to capture a frame from an app running on a device. You do this by clicking a button in the debug bar in Xcode. It will then bring up this frame in a debugger and allow you to see everything used to build that frame. You can see all the draw calls (and optionally have them grouped by using some OpenGL ES extensions Apple have provided) and step through them one by one to see how your frame is built. Xcode will even highlight which part of the frame is currently being worked on in the main display. On top of this you can use the assistant to inspect various resources that are relevant to the project or are bound to the frame, such as textures and shaders.

There are lots of other improvements to debugging as well. The most important of these is that LLDB actually seems stable enough to use for once. Previous versions have caused Xcode to crash pretty quickly, but I've actually been able to get some debugging done with LLDB. Another nice improvement is a menu item to continue debugging to the line the text cursor is on, which previously required the mouse to do.

For iOS development, application data can be stored in archives and loaded onto the device at runtime to aid in debugging and testing. We also finally have the ability to fake location data, allowing us to change the location to anywhere in the world, or even set up paths, which saves a lot of trips out that were required previously. There is an option to detect wireless connected devices, but I haven't actually been able to get this working, nor find any real information on it, so I'm not entirely sure whether it allows for the fabled wireless debugging or not.

Miscellaneous Improvements

These are the improvements I've noticed that don't really fit anywhere else. Firstly the preferences window has had a bit of a re-arrange. There is a new Downloads section, where you can download doc sets, but also now support for older OSs and SDKs for testing and debugging purposes. The Source Trees section has also been moved into Locations to tidy things up a bit.

There are improvements when creating new projects and files. You can specify the class prefix to use for the initial files in your app. And Apple has added the name field for creating classes and categories back, preventing all those times when you'd type your class name in the subclass field by mistake.

While iCloud is one of the big new products Apple has released, the changes to Xcode to support it are rather minimal, with a new section added to the entitlements to enter your iCloud containers.

A new feature I know many people will absolutely love is the return of sorting in the navigators. You can now sort items by type or alphabetically. There are also some improvements in code completion. Autocompletion of block parameters with void return types and arguments no longer puts in the unnecessary voids, and if you're changing an #import, the remainder of the file name will be automatically deleted when you complete, useful for the many times you end up with a stray .h

For users coming from other IDEs such as Visual Studio or Eclipse, where they're used to double clicking to open a file, Xcode 4.2 adds a preference to treat a double click as a single click, which should hopefully help make the transition easier.

Finally the organiser has seen some improvements. Macs are now shown in the devices panel, allowing them to be easily added to the developer portal. And the documentation viewer is MUCH faster. It's still slow in places but it's a dramatic improvement over 4.1.

Missed from Xcode 4.1

Sometimes after doing my reviews I'll miss some new feature that was added. There are two of these that I know about from Xcode 4.1 so I thought I would give them an honourable mention here. The first is an improvement in selecting fonts. Rather than bringing up the full font panel straight away, Xcode 4.1 introduced a popover providing a quick way to change the font, including selecting various system font styles.

Another useful change is in Core Data model files. They were previously in a binary format, which are always a joy to use with version control. If you set the tools version on your model to Xcode 4.1 it will update it to a new XML based format, making it much easier to merge any edits.


Xcode 4.2 is a pretty sizeable improvement. It feels noticeably more stable and faster in several key areas, and many minor niggles have been fixed. New features like Storyboards and customising control appearance should dramatically change how iOS apps are built and allow you to remove huge amounts of code. And finally the removal of GCC in favour of LLVM 3.0 signals yet another break from the past of Apple's developer tools, one that will be complete when LLDB replaces GDB.

Surprisingly I've had no radars to file for Xcode 4.2 as of yet. Those issues I've found are ones that affected previous versions and have been filed for a while. This version shows how Xcode 4 has matured. It's no longer the brand new, rather exciting albeit very buggy IDE we saw released at the start of the year. It's more stable, more complete and is starting to showcase the sorts of features that the changes made possible. This is likely the last major Xcode release, and therefore the last review, of 2011. I'm looking forward to what Apple will give us in 2012 and how it will build on the successes of this year.

If you would like to send me a comment on this post, send an email to pilky@mcubedsw.com. Alternatively send a tweet to @pilky.

Dick tag:pilky.me,2011:view/27 2011-10-07T00:00:00Z 2011-10-07T00:00:00Z Martin Pilkington pilky@mcubedsw.com Lots of people have been writing posts describing what Steve Jobs's passing and life meant to them and how he influenced them. I've been struggling to figure out how to put my thoughts together and whether I really wanted to, as others have said most of what I wanted far more brilliantly than I could. However, someone who generally pisses me off, pissed me off to an even greater degree than usual. Said someone is Richard Stallman, or as I shall refer to him henceforth, Dick (as suggested by Justin Williams). The reason for this will soon become apparent.

For those of you who don't know about Dick, well he's pretty much a guy who fights vigorously for open source in the name of "freedom", dismissing anyone who doesn't fit into his myopic world view as the enemy and generally being arrogant and pig headed. I'll leave it up to you to make the obvious connection to certain other political movements and leaders.

Usually Dick just spouts his propaganda or heckles people who are trying to improve our education systems. But this time Dick has truly lived up to his name with a rather hate-filled few paragraphs about Steve Jobs:

Steve Jobs, the pioneer of the computer as a jail made cool, designed to sever fools from their freedom, has died.

As Chicago Mayor Harold Washington said of the corrupt former Mayor Daley, "I'm not glad he's dead, but I'm glad he's gone." Nobody deserves to have to die - not Jobs, not Mr. Bill, not even people guilty of bigger evils than theirs. But we all deserve the end of Jobs' malign influence on people's computing.

Unfortunately, that influence continues despite his absence. We can only hope his successors, as they attempt to carry on his legacy, will be less effective.


"Freedom" is an overused word, much like "open", and generally means "my way of thinking". What Dick is saying is that we should work towards his view of "freedom". His view is that people should be able to do what they want with whatever technology they have. If I want to run OS X on my toaster then I damn well better be able to. I should be able to hack the internals of every piece of software and hardware and heaven help anyone who makes that hard in any way.

There is nothing inherently wrong with that view, nor in achieving it. But it's something very few people actually care about and something that should almost never be your number one priority as Dick desires. His view of freedom is something that helps him and his supporters help themselves. It lets them hack to their hearts' content. But it also makes technology complex, awkward, fragile and scary to the vast majority of people. This isn't to say that closed hardware and software can't do that; in fact the vast majority of it is equally as bad. But that isn't what Steve Jobs or Apple wanted. Steve Jobs didn't care about "closed" vs "open", he cared about "great" vs "crap". Closed and open were merely tools that could be used to create great products.

Steve Jobs cared about freedom as well, but a freedom that was much more useful to the majority. It was freedom from the fear of technology, from having to look after technology. It was the freedom given by simply making technology usable.

My Grandma is a great example of what this freedom allows. She always dismissed and ignored the games my brother and I played on various consoles. They seemed complicated and a waste of time. Yet when we first showed her a Nintendo Wii, it was completely different. It was easy to learn and fun for her to play. She didn't have to sit down for hours and perform lots of challenging tasks, she just had to pretend to throw a bowling ball. She had never before played a computer game, but here she was enjoying it.

Similarly she doesn't own a computer. Yet she loves buying things at auction and would come round so we could help find her auction houses. She would also come round for help researching places to go on holiday. But every time she would need me or my Dad sat there to help guide her through and do the majority of the work. Yet when we used the iPad, she was able and willing to do much more herself. It just felt easier.

These aren't isolated cases, there are many stories out there of people who previously were left out of technology being brought in. From the elderly to the very young and everyone in between. This new wave of technology is opening up a whole new world to these people. And the reason for this is that those behind the technology don't care about specs or features or openness as the primary driver behind their products. Their focus is instead on people, and enabling them to achieve things that were previously unachievable.

This is no more evident than in marketing videos. The vast majority talk about the specs and how the screen size and the processor and the ports will help you. But instead look at Apple's adverts. They talk about reading books, cooking, sharing memories, learning. They don't talk about the specs of the camera with regards to video conferencing, but show you the scenario of a soldier on tour being able to see a sonogram of their unborn child from halfway around the world. These are things that people care about, everything else is incidental. I love Apple's 'We Believe' iPad advert as it sums this attitude up so well.

This is what we believe. Technology alone is not enough. Faster, thinner, lighter - those are all good things. But when technology gets out of the way, everything becomes more delightful, even magical. That's when you leap forward…


That is what Dick talks about when he talks about Steve's legacy. What Steve did wasn't the iPhone or the iPad or the Mac or Apple or Pixar or any of the other things he played a part in creating. No, what Steve did was to be one of the strongest voices out there in favour of using technology, not for technology's sake, but to make the world a better place, to make people's lives better. Dick is so focused on the fact that large parts of the things Steve worked on are closed that he's missing the real drive and the real achievement.

Anyone who strives to make technology easier, more enjoyable and more accessible is one of Steve's successors. I like to think that I'm one of those people. I know many of my developer and designer friends are in that group as well. And we will not stop, we will not falter and we will not be any less effective. We will continuously strive to use technology to make people's lives better. Not for Apple and not even for Steve, but because it is the right thing to do. And if Dick is truly against those ideals and is set in his own tiny world view, then I have only one thing to say to him…

Think Different or GTFO

If you would like to send me a comment on this post, send an email to pilky@mcubedsw.com. Alternatively send a tweet to @pilky.

The Final Cut Pro X Mentality tag:pilky.me,2011:view/26 2011-09-03T00:00:00Z 2011-09-03T00:00:00Z Martin Pilkington pilky@mcubedsw.com Apple is no stranger to throwing out the old and changing things around. You can build something incredible, that makes lots of money and makes many people happy, but if you let that stagnate then people start to get restless. It's how the new kid on the block comes along and steals your thunder. So you have to change. People will get annoyed and even angry, as human beings are generally averse to big changes, at least to begin with.

But change is important, change is what makes you survive. There is a wonderful quote by Darwin about natural selection:

It is not the strongest of the species that survives, nor the most intelligent that survives. It is the one that is the most adaptable to change.

This not only applies to life, but any competitive system. Businesses and products that adapt best to change are the ones that survive, and there is no sector that changes more quickly than technology. In the past 20 years Mac devs have seen the change from 68k to PPC, from Classic to OS X, from Carbon to Cocoa, from Quickdraw to Quartz, from PPC to Intel, from 32 bit to 64 bit, from mostly single core CPUs to up to 24 virtual cores (and the average number of virtual cores in a new Mac being 7.5). And finally we also have the change from only having the Mac to having the iOS platform to target as well.

Adapting to change

Apple has some very mature applications: Xcode, Final Cut Pro and iMovie to name a few. They're all over a decade old, in one way or another. Final Cut Pro and iMovie date back to the late 90s, with Xcode's heritage spanning back even further. As a result they weren't built for the modern world and had many fundamental flaws. The problem is that people end up just working around these fundamental flaws and accept them as how things have to be. Often this view can spread to the developers as well, which is incredibly dangerous.

All 3 of the applications I mentioned have received major updates that have completely re-thought the fundamental problem they're trying to solve. They aren't applying more tape to hold everything together, they're tearing it all down and doing the job properly. iMovie '08 was the first of the changes, followed by Xcode 4 and Final Cut Pro X. With each version, the first reaction of most people was annoyance and anger. They were less stable and less capable than their predecessors. Apple didn't care about its users, it was dumbing everything down.

But while people complain about the things that are missing, they lose sight of the huge changes that happened to push the apps forward. Missing features can be re-implemented in the future, but getting the core foundations done both internally and in terms of workflow is far harder to do. Get those wrong and you'll be back to square one. Get them right, and not only can you add the missing features, but also features that weren't even possible before. You can redecorate your house if you don't like it, but if the foundations crumble you won't have a house at all.


Yesterday Brent Simmons posted on his blog about Xcode's pane management and how much it frustrated him. He had a lot of good points about how awkward it can be to lay things out. Now Xcode does offer solutions for these issues, mostly in the form of behaviours, but this post isn't about Brent. Michael Tsai linked to Brent's post and someone left a comment there that stuck with me:

The FCPX mentality invades all new Cupertino software. Devs aren't immune from the syndrome.

As Dear Departed Leader was fond of saying, folks like Brent who prefer 'pro' workflows will die out, after all. "No compromises" in the brave new world.

There are many people who would see this "Final Cut Pro X mentality" to be a bad thing, it leads to buggier software, less capable software and more dumbed down software. Based upon iMovie 08, Final Cut Pro 10.0 and Xcode 4.0 they might have a point, but it's incredibly short sighted. As I said in my Xcode 4 review, we shouldn't judge on what it is, but what it shows it will be. Look past the bugs and the missing features to the potential new features. Don't look at the version now, but the next version and the version after.

Consider two pieces of software we as developers are used to now: LLVM and Mac OS X. Who remembers what Mac OS X 10.0 and the first version of LLVM were like? They were buggy. They were missing features. They were dumbing down! Yet now they're things we couldn't live without. While Mac OS X was buggy and lacking at first, the foundations it laid are what Apple's success with the Mac, the iPhone and the iPad is built upon. The same with LLVM, which now powers most of the smarts behind the developer tools as well as technologies such as OpenGL and OpenCL. What we have now couldn't really have been built upon the classic Mac OS or GCC.

The "Final Cut Pro X mentality" isn't something to be shunned or looked down upon. It should be held up as a good attribute, a willingness to improve things in the long term, even if they may be bumpy in the short term. It may seem like they're crazy to take such a risk, and indeed Apple has had to start selling FCP Studio again and had to keep iMovie 6 and Xcode 3.2 around, but it's a trait we should all try to have. I look at these new versions and see what the future could hold and I'm excited, and so should you be. So don't put down Apple or anyone else who takes a risk on the future rather than playing it safe with what worked in the past. Instead praise them for what they're doing.

Here's to the crazy ones…

If you would like to send me a comment on this post, send an email to pilky@mcubedsw.com. Alternatively send a tweet to @pilky.

Finding Derived Data tag:pilky.me,2011:view/25 2011-08-21T00:00:00Z 2011-08-21T00:00:00Z Martin Pilkington pilky@mcubedsw.com Prior to Xcode 4 there existed the Build folder. I hated this folder, especially as I often wanted to zip up a project and send it to someone, and I'd end up having a huge file because I'd forgotten to delete the Build folder. So imagine my delight when Apple created the Derived Data directory in Xcode 4. This directory contains all the build products, intermediate files, indexes, logs etc. It is usually located in ~/Library/Developer/Xcode/DerivedData. The problem is, it is rather hard to find the particular Derived Data directory for a particular project…

The Problem

If you're inside Xcode's build environment then you're OK, as you can use the BUILT_PRODUCTS_DIR setting. If you're outside though there is no obvious way to find it. The problem is that the Derived Data directory, if placed in the usual location, will be of the format <projectname>-<hash>. Now there's no public documentation of what that hash is, so it seems like there's no way to find out which folder you need if you have multiple projects or workspaces of the same name.

The other, less thought about issue is that a user can choose to put their derived data folder elsewhere. It can be placed in a project relative location, or just in a random path elsewhere on the system. So how on earth do we find out where the directory is located for a given project?

The Solution

The solution, it turns out, is quite easy. Let's take the two problems in reverse, starting with the case of the user-located directory. When you change these settings, an extra file is added to your workspace or project. This is called WorkspaceSettings.xcsettings and is located in the xcuserdata/<username>.xcuserdatad folder of your workspace bundle (note that if you have a project, it will contain a project.xcworkspace bundle).

This file is just a plist, so it's easily readable. Our first step is to check the IDEWorkspaceUserSettings_DerivedDataLocationStyle value. If this is 1 we have a full path, if it is 2 we have a project relative path. Next we look at the IDEWorkspaceUserSettings_DerivedDataCustomLocation value to find the path. Simple enough.

If we don't have a WorkspaceSettings file, or the LocationStyle value is 0, we want to look in our standard Derived Data location. But we still have those pesky hashes to deal with. Well thankfully Xcode provides us with a way around that. We can whittle down the folders by looking for those prefixed with our project or workspace name. Now what we need to do is look in each one, where we will find an info.plist file. This file contains a single value with the key WorkspacePath. As you can probably tell this contains the path to our workspace or project, which we already know and so can match up.
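
Putting it all together, here's a minimal sketch of the lookup (the function name is mine, error handling is omitted, and it follows the key names and values described above):

NSString *PilkyDerivedDataPathForWorkspace(NSString *workspacePath)
{
    // 1. Check for a user-specified location in WorkspaceSettings.xcsettings
    NSString *settingsPath = [workspacePath stringByAppendingPathComponent:
        [NSString stringWithFormat:@"xcuserdata/%@.xcuserdatad/WorkspaceSettings.xcsettings", NSUserName()]];
    NSDictionary *settings = [NSDictionary dictionaryWithContentsOfFile:settingsPath];
    NSInteger style = [[settings objectForKey:@"IDEWorkspaceUserSettings_DerivedDataLocationStyle"] integerValue];
    NSString *custom = [settings objectForKey:@"IDEWorkspaceUserSettings_DerivedDataCustomLocation"];
    if (style == 1) return custom; // full path
    if (style == 2) return [[workspacePath stringByDeletingLastPathComponent] stringByAppendingPathComponent:custom]; // project relative

    // 2. Otherwise, search the default location for the folder whose info.plist points back at our workspace
    NSString *derivedData = [@"~/Library/Developer/Xcode/DerivedData" stringByExpandingTildeInPath];
    NSArray *folders = [[NSFileManager defaultManager] contentsOfDirectoryAtPath:derivedData error:NULL];
    for (NSString *folder in folders) {
        NSString *folderPath = [derivedData stringByAppendingPathComponent:folder];
        NSDictionary *info = [NSDictionary dictionaryWithContentsOfFile:[folderPath stringByAppendingPathComponent:@"info.plist"]];
        if ([[info objectForKey:@"WorkspacePath"] isEqualToString:workspacePath]) {
            return folderPath;
        }
    }
    return nil;
}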

And voilà, we can now access our project's Derived Data directory outside of Xcode. And for my encore I just need a babel fish and a deity…

If you would like to send me a comment on this post, send an email to pilky@mcubedsw.com. Alternatively send a tweet to @pilky.

Cupertino, We Have a Problem tag:pilky.me,2011:view/24 2011-07-26T00:00:00Z 2011-07-26T00:00:00Z Martin Pilkington pilky@mcubedsw.com Lion is a great OS. It has brought many great user features (Versions, autosave etc) and many fantastic developer features (Autolayout, popovers etc). It has brought one new feature though that is annoying a lot of users: Mission Control. I've mentioned some complaints I've got about it on twitter, but it's got to the point where I really need a blog post to convey my full opinion on it. I can't quite remember another change in an OS X update that was so big, yet so awful.

For those who don't know, Mission Control is a sort of replacement for Exposé and Spaces that tries to combine the two, along with full screen apps, into one central location for managing windows. It sounds good in theory, but as we'll see the reality isn't great.

What I like

It isn't all bad and there are some things that I like. Firstly, you now have a different desktop background per space, which is a nice change. The Mission Control UI is also prettier than the old "All Spaces" view in Snow Leopard. And probably the best change is that you can create and delete spaces from within Mission Control, removing the trip to System Preferences.

The problem is, that's pretty much all I can find good about it. So what about the bad?

What I hate, but can turn off

By default Mission Control makes Dashboard a space to the left of your first space. It sounds nice at first, but you soon realise that it's a lot more work. Pressing an F-key anywhere is easy, but having to flick several times to the left, or bring up Mission Control, move the cursor to Dashboard and click, quickly becomes tedious. Thankfully you can turn this off.

The other annoying feature that can be turned off is having spaces re-arrange based on your last use. I can sort of see this being useful to some people, but I often end up losing where things are with it turned on, and if I need that sort of functionality I can use cmd-tab which feels a lot more natural.

What I hate and can't turn off

You might as well get comfy as this is a big list. I'm splitting it up into the 3 areas Mission Control encapsulates:


Spaces

I have long been a user of virtual desktops. I used Desktop Manager on Panther and Tiger and since Leopard I've been a big fan of Spaces. I usually had 4 spaces laid out. Space 1 was general stuff such as iChat, Skype, NewsLife, Mail and usually Safari. Space 2 had Photoshop assigned to it. Space 3 was for web development and Space 4 was for Mac development and had Xcode assigned to it. I could easily flick through these 4 spaces and had a good spatial awareness of where everything was on my computer. I could easily switch from Space 4 to Space 1 and back with a flick on my mouse. I also had iTunes assigned to all spaces so I could control my music from any space.

Sadly this system that I've been more than happy with for getting on 7 years has been completely destroyed with Lion. Spaces no longer loop round, from the last to the first. This has effectively limited the number of useful spaces I can have. I used to have 4, now I can only manage 2 and have to do a lot of manual re-arranging. I might just be able to manage 3 but it's felt awkward when I have. (rdar://9592853)

While you can still assign windows to certain spaces, or all spaces, this is incredibly buggy in 10.7. Whenever you restart your machine, it forgets them and so you have to reset all your assignments. (rdar://9592870)

There is a lot of gratuitous animation and graphics in Lion. It's ironic then that they actually removed one of the nicest bits of animation from Snow Leopard. With iTunes assigned to all spaces, when I switched space it would stay put, while the other windows slid away. In Lion, any apps assigned to all desktops slide out with the current space and then just appear on the new space. It feels tacked on. (rdar://9844274)

If you have multiple monitors, you may not always have them turned on. I often have times I turn on my iMac but not my second display to do something, but occasionally I need to access a window that I left on the second display. This was easy pre-Lion. I hit F8 to bring up the All Spaces view and saw all the spaces for both monitors on my main display. I could then drag a window from one display to another. This is completely gone in Lion. Firstly it shows windows on the display they were on, rather than on a single display. Secondly, you cannot drag a window from one display to another in Mission Control. (rdar://9844324)

It's so bloody slow. When you switch a space, you have to wait for the movement animation to stop and for the desktop icons to fade back in, before you can move in the opposite direction and the delay between the movement stopping and icons fading in is relatively massive. (rdar://9844555)


Exposé

You cannot use the Exposé All Windows mode across all spaces in Mission Control as you could in Snow Leopard. This means there's no way to truly see all windows on the system. Instead you have to go into Mission Control on one space, then switch to the next space and the next. (rdar://9844318)

Windows are stacked with the most recent one first. Given how much ridicule was given to Windows Vista's 3D stacked window chooser in terms of usability over Exposé, it's disheartening to see Apple falling for exactly the same mistakes in Lion. (rdar://9844311)

You can no longer click and hold on an app's dock icon to go into the App Windows Exposé mode either. I didn't actually realise how much I used this until I found it was gone in Lion. (rdar://9844305)

Full Screen

And finally we have full screen apps. For laptops these are fantastic, but when you get onto larger displays, and especially multi-display setups, their usefulness completely falls apart. Secondary displays are largely useless with full screen apps. Sure you can have certain windows such as inspectors on a second display, but that's more a cop-out than anything.

You should ideally be able to make an app go full screen on a particular display. This could be as simple as press and hold on the full screen button to bring up a list of displays. (rdar://9817280) You should also be able to loop through displays individually when you're in full screen apps. This would allow you to have a main app on the primary display and several apps on secondary displays which you can loop through individually. (rdar://9817307)

Finally, on big displays it just feels wrong to have the toolbar at the top of the screen in full screen. It feels more like it should be on the bottom, especially with Safari. (rdar://9817332)

Overall, Mission Control is a mess. It has noble aims, but it feels like it was built by someone who'd only ever had a cursory glance at Spaces and Exposé and who hadn't used any Mac other than a MacBook Air with no secondary display. It seems too complex a system to appeal to regular users, but too awkward a system to appeal to power users. I'm incredibly happy with most of Lion, and can avoid most of the bits I don't like. But Mission Control is both unavoidable and the one bit of Lion I wish I could revert back to how it worked in 10.6. Hopefully Apple fixes at least some of the huge array of flaws in upcoming Lion updates.

If you would like to send me a comment on this post, send an email to pilky@mcubedsw.com. Alternatively send a tweet to @pilky.

The Xcode 4.1 Review tag:pilky.me,2011:view/23 2011-07-25T00:00:00Z 2011-07-25T00:00:00Z Martin Pilkington pilky@mcubedsw.com Another Xcode release, another review of what's new. Xcode 4.1 coincides with the release of Lion and includes many improvements, some to help adopt new technologies in Lion and some just to make Xcode a better IDE. So let's get started with what's new.

General Improvements

To start with I'll go through a few of the general improvements. One of the big complaints with Xcode 4.0 was stability. We'd long been spoiled by the comparative stability of Xcode 3.2, so to go back to pre-3.2 levels of stability was incredibly frustrating. This new version feels a lot more stable. Assertion failures are a rarity, rather than part of the daily workflow as was the case in 4.0. I've experienced very few crashes, but then again I personally experienced very few in 4.0. If stability has been one of the things keeping you from switching to 4.0, then 4.1 will make you much happier.

Lion introduced the concept of full screen apps, and Xcode 4.1 is a proper full screen citizen. I have mixed feelings about the full screen support, which extend to the feature in Lion in general. I can see it being incredibly powerful on laptops, where screen space is at a premium, but it seems incredibly half baked everywhere else.

When developing I usually have my workspace on my main display, but my second display can contain all sorts of stuff. It's not unusual for me to have a web page for reference, the documentation window, Lighthouse Keeper for issue tracking, a terminal window, other text files for reference and iTunes, all open on my second display. Full screen effectively nullifies that. There is also the other issue that your running app will be in a separate space to Xcode, which makes it harder to debug. I recommend trying it out to see if it suits your circumstances, but don't expect it to be ideal in every situation.

Xcode 4 vastly improved the UI over previous versions of Xcode, but did have some oddities. The schemes toolbar item, which was previously one single item for setting two values, is now a path control. This allows the active scheme and target platform to be changed independently. Apple has also removed the strange button section next to filter bars in navigators, replacing them with properly sized buttons and icons you can actually distinguish clearly. However, they still haven't replaced the odd filter bar icon with something that makes more sense like, well… anything else really.

Old (left) vs new (right) filter bar icons

Interface Builder

Lion brought with it lots of great new APIs, many of which you can find out about in this post from Ole Begemann. With these new APIs comes improvements to Interface Builder to support them. At the most basic you get support for new controls such as popovers, text finders and inline buttons, as well as the nice improvement of a custom NSFormatter object you can drag to controls. There are 3 more important changes though:

UI Identifiers

Lion brings with it the new Resume feature, to make it easier for developers to restore application state between launches. One of the important parts of this is giving UI elements identifiers. There is now a standard identifier property that can be found in the Identity inspector (this can be confusing at first if you're used to using views with their own identity property, such as table columns). This identifier can potentially be useful in places other than Resume though. View-based table views use them, and they are accessible via the Accessibility API, which allows for many interesting possibilities.

View-based Tables

iOS developers generally don't know how good they have it when it comes to providing custom cells for a table. They can use the nice, clean, familiar APIs of NSView for their cells, but on the Mac we've long had to deal with the monster that is NSCell. Lion vanquishes this monster by introducing view-based tables. Now Apple could have stopped at just making it work like UITableView on iOS, but instead they've made it much easier to use.

Much like with iOS, view-based tables hold a queue of views that can be reused, and you should first ask the table for a cell from its queue before creating one yourself. This creation can be done simply by instantiating an object, but can also be done from a NIB. On the Mac, NSTableView is heavily optimised for the latter case. You don't have to create a separate NIB for your cells, but instead can just drag an instance of NSTableCellView from the Library to your table, and the table creates its own self-contained NIB internally. All the cells in your table can then be edited within that single NIB. You simply assign these cells an identifier, and then all you need to call is the following method:

[myTableView makeViewWithIdentifier:@"MyCell" owner:self]

The table view will then pass you back a cell from its reuse queue, or if one doesn't exist, handle the creation of a new cell from the NIB for you. This helps remove a lot of the boilerplate code required in iOS and shift more of the cell design to IB, in a much more elegant way.
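
To show where that call fits, here's a minimal sketch of the relevant NSTableViewDelegate method. It assumes a cell identifier of "MyCell" set in IB and a hypothetical entries array backing the table; neither comes from the review itself.

- (NSView *)tableView:(NSTableView *)tableView viewForTableColumn:(NSTableColumn *)tableColumn row:(NSInteger)row {
    // Reuses a queued cell, or instantiates one from the table's embedded NIB if none is available
    NSTableCellView *cell = [tableView makeViewWithIdentifier:@"MyCell" owner:self];
    [[cell textField] setStringValue:[[self entries] objectAtIndex:row]];
    return cell;
}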

Autolayout

My favourite new developer feature in Lion is Autolayout. This is an incredibly powerful layout system for AppKit that blows away anything else I've seen on other platforms. It allows you to make incredibly complex layouts, which previously may have taken tens if not hundreds of lines of code, all from the comfort of IB. For the most part IB uses the standard HIG guides you will have seen when building your UIs to implicitly define constraints, but you can explicitly define your own if you need more accuracy (and I've found I often do).
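
Constraints can also be built in code. Here's a minimal sketch, not taken from the review, that pins a hypothetical button 20 points from the right edge of its superview using the visual format language:

// button is assumed to already be a subview of some container view
[button setTranslatesAutoresizingMaskIntoConstraints:NO];
NSDictionary *views = NSDictionaryOfVariableBindings(button);
[[button superview] addConstraints:[NSLayoutConstraint constraintsWithVisualFormat:@"[button]-20-|"
                                                                            options:0
                                                                            metrics:nil
                                                                              views:views]];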

There are also some incredibly annoying bugs. If you try to access constraints via the object list view, whenever you modify anything the constraints section closes (rdar://9728875), and IB doesn't entirely match up to the capabilities of the API, as constraint constants cannot be set to negative values (rdar://9717632).

I won't go into Autolayout in much more detail here as I'm planning a rather extensive blog post to cover it in the near future.

Behaviours

Behaviours were a nice little addition to Xcode 4, allowing you to customise a few parts of the UI. In 4.1 they have improved dramatically. Firstly they let you perform a lot more actions. You can now show or hide the utilities sidebar and show or hide the toolbar. You can define what editor and debugger views you want to be in, and also enter and exit full screen.

The really big change though is not the actions, but the behaviours. It is now possible to add custom behaviours that can be accessed via the menu bar, or through keyboard shortcuts. This is great if you want to have tabs set up for different tasks. I have an editing UI with no utilities panel and the file navigator visible, and a NIB UI in a different tab with the utilities visible and a file navigator filtered by 'xib'. I can switch between the two with cmd-1 and cmd-2. I then have a "show debug" behaviour set up as cmd-4 which switches to the debug navigator and brings up the debugger view.

Editor Improvements

There have been various improvements to the editors in Xcode 4, many fixing things that were simply missing in 4.0. The Core Data editors were inexplicably missing the ability to set the model identifier or the current version of a model, both of which have made a return in 4.1. A bit easier to understand, but possibly missed by more people, was the lack of assembly or pre-processed output generation for your files. This has made it back into 4.1, integrated into the assistant, so you can easily view the assembly for a file side by side with the source.

Another improvement, one that I almost completely missed, is that jumpbars no longer have type to select. Now you may be thinking "why is that an improvement?". Well, in place of type to select is type to filter. As you start typing in any jumpbar, a filter bar appears and the contents of the menu are whittled down. This feels so much better than type to select if, like me, you cannot always remember what letter a method starts with. The matching algorithm seems to be the same as the one used in the Open Quickly panel, so your search string doesn't have to match one continuous string.

Schemes

Schemes have had a few minor improvements. First of all, the UI scaling control has been removed, due to the change in how resolution independence works in Lion. There is now finally a checkbox for NSZombie in the Diagnostics tab, which was oddly absent in 4.0. Another nice enhancement is a filter bar in the test section, which is incredibly helpful if you have a lot of tests.

Project Management

There are a few small improvements in terms of project management in 4.1. The first is the ability to "modernise" a project. What this does is go through your project and look for any settings that aren't what Apple recommends. These could be defaults that have changed, or even settings that have been completely removed. It's a very useful way to update those build settings you may not even know about.

The other improvement is a UI for specifying entitlements. This is important over the coming months as we approach Apple's deadline requiring new App Store submissions to use the new sandbox. Specifying them is largely a case of selecting a few checkboxes, but of course the sandbox itself may raise other issues for certain classes of apps, most requiring code changes, but in some cases causing big problems (in which case, please file bug reports with Apple, as I have found them more than willing to listen to the needs of developers with regards to the sandbox).

Debugging

The debugger has seen a very significant improvement in Xcode 4.1. Often you have to do a lot of typing in the debugger console: variable names, method names etc. This could lead to errors as you're having to type out everything by hand. Well now you can rejoice, as Apple has added code completion to the debugger console. It provides completion related to the current scope of the debugger, so it will complete any local variables known about at the point where your application is paused, as well as any method or function names.

Unfortunately the other side of debugging hasn't seen quite as big an improvement. LLDB is still virtually unusable for more than basic experimentation at the moment. It isn't causing Xcode to crash quite as often, but I'm still finding it far too unstable to use as my main debugger. For now my recommendation is the same, try it out and see if it works for you, but don't expect it to replace GDB quite yet.

Documentation

In my 4.0 review you may have concluded that I had some problems with the documentation viewer, enough for me to describe it as "a steaming pile of donkey shit". While 4.1 hasn't fixed all my complaints, they have fixed a few of the biggest issues I had. Firstly, Apple has brought back search as you type, which makes life so much easier if you're not 100% sure what search term to use. They have also removed the annoying outline view results, replacing them with the flat list that was in 3.2. While they haven't fixed everything, I no longer hate using the documentation viewer, which is in itself a vast improvement.

Improved key bindings UI

In my last review I talked about the improvements to the key binding editor in 4.0. Well 4.1 brings even more improvements, both to the UI and to the completeness of key binding support. Firstly, there are key bindings for pretty much everything now. Many editors such as the hex editor, property list editor, version editor, etc. were missing from the list of key bindings. You can now also change the key bindings for accessing the various inspectors and libraries.

UI-wise, it is much easier to work with key bindings. 4.0 provided key bindings as a flat list. There was no distinction between what was in the root of a menu, what was in a sub menu and what was an alternative to a menu item. In 4.1, menu items list their "path" if they are in sub menus and alternate items are grouped together, with the alternatives greyed out.

Old (left) vs new (right) key bindings UI

The scope bar also has two new items, for conflicts and customised items. The customised item is pretty self explanatory: it shows which items you've given custom bindings. The conflicts item is rather useful though, showing any potential clashes between bindings within Xcode, but also with systemwide shortcuts.


I said that Xcode 4 felt more like a 1.0 than a 4.0, given how much had changed. 4.1 is the first real update to it and does not disappoint. Many features that were missing, probably due to time constraints, have finally made it in. Stability has improved a lot, but could still be better. Many of the new features have also seen minor, but highly significant improvements.

Xcode 4.0 was largely laying the foundation upon which the future can be built. It was a transition that caused a lot of controversy amongst some, much like what Final Cut Pro X has done. What a lot of people may forget, or may have even not experienced, was that the transition from Mac OS 9 to Mac OS X was just as controversial, if not more so. It was unstable, it was lacking many features people were used to, but it laid the foundation. And look at what has been built upon those foundations: Lion, iOS, iWork, iLife. Not a single person would look back now and say that Apple was mad to do what they did back in 2001, taking a dated piece of software, that was falling further and further behind the competition, and rebuilding it into something designed for the decades ahead. Xcode 4.1 is our first glimpse of what is going to be built upon the new foundations laid down by Xcode 4. When 4.2 comes out later this year we'll likely see even more.

If you haven't switched from Xcode 3 yet, I strongly recommend starting the transition soon. If you want to get the most out of the latest SDKs, you'll need to be using Xcode 4, and going forward everything is about Xcode 4. It will be rough at first, with much swearing and cursing because things have changed, but that's to be expected. I went through exactly the same thing. But eventually you will adapt and get over the differences. It's also important so you can truly find the pain points and can file bugs. I listed 18 radars in my initial post; 6 of those have been fixed, either directly or indirectly, in 4.1. Apple may be the ones who make Xcode, but we are the people they make it for and the ones they want to listen to.

If you would like to send me a comment on this post, send an email to pilky@mcubedsw.com. Alternatively send a tweet to @pilky.

Finding Equality tag:pilky.me,2011:view/22 2011-05-23T00:00:00Z 2011-05-23T00:00:00Z Martin Pilkington pilky@mcubedsw.com So, I find myself here again, talking about the powder keg topic that is equality. My last post caused lots of argument on Twitter. Things seemed to have calmed down and everyone had gone away to reflect on things. Today Faruk, whose initial post prompted my last post on the subject, posted a sort of rebuttal to my post and another person's post. It is a much more reasonable and well articulated post, but I still disagree with several points. What I ultimately found through the arguments on Twitter is that Faruk and I both want the same goal, but we disagree somewhat on the means by which to achieve it.

First off, I want to outline the three key points I'm going to make, just for those who don't like lengthy posts:

  1. Conferences are a symptom, not a solution

  2. I'm not immune from negative discrimination because I'm a white man

  3. Discrimination is wrong, but accounting for differences isn't discrimination

Those are my 3 key points. You may agree or disagree with them already, but I hope you'll read the rest of the post to understand why I make those points.


So my last post was focused around conferences. Now I admit I may not have articulated my point as well as I could have, but I still stand by the basic point, at least in the dev industry.

One thing that became clear is that there are wildly different views depending on whether you're looking at design or development. I talked about things from the view of the dev industry, where we only have 20-25% women. As Faruk and many others have pointed out, in design women make up the majority (with figures of 60%+). This isn't very surprising to me, given the gender makeup I've seen in the various art and design classes I've been in. This difference can affect the importance you put on conferences.

If you already have a relatively balanced gender mix, then obviously you focus more on conferences. You haven't really got the problem of getting women into the industry. If you don't have the balanced community though, then fixing conferences is like trying to fix a leaky pipe on the Titanic. Sure, it helps a bit but it doesn't do much in the grand scheme of things.

On this point, when I was arguing that it's more important to change the makeup of the community first, rather than focus on improving conferences, Faruk replied:

Martin uses this as a means to argue against what Mike Monteiro and I (and many others) are fighting for, which is ensuring the presence of at least one woman and one minority group member in a conference line-up.

My argument there was largely that we need to avoid having a situation where we have "the token woman" or "the token black person". Everyone feels better if people are chosen based on quality. Attendees don't have to put up with crap speakers, and the speakers don't feel like they're there just to make up the numbers. You should try to get a diverse mix of speakers, but they should also be good. Ultimately if you're doing a good job of seeking out speakers then you'll naturally get a diverse speaker mix, but to enforce some constraint is more idealism than any pragmatic solution.

Ultimately though, conferences have minimal effect on the problem of getting people into an industry, which was the main point of my last post. Go up to anyone not yet in, or interested in, an industry and ask them to name a conference in that industry. Odds are they can't. Conferences are very insular. If you've solved the problem of getting a diverse mix into your industry then it's worth looking into making conferences reflect that. Otherwise, you can change conferences all you like; you're going to make little impact on the imbalance in your industry.


Right, now for what may be the most controversial statement. Being a white man does not make me immune from negative discrimination. I hear this claim from many groups, even other white men. And it is absolute bullshit. While white men may not get as much discrimination based on our skin colour or our gender, negative discrimination does happen. And it pisses me off when I see quotes like this:

This brings us to Substantive equality: the principle of taking action to “redress disadvantages suffered by some groups.” Of course this concept creates tensions and is met with resistence: nobody likes experiencing discrimination—least of all a group of people who have never been discriminated against their entire lives, and have enjoyed all the privileges of that fact. Subconsciously or otherwise.

Or this:

Returning to those barriers, it is understandable that people who have lived privileged lives don’t readily acknowledge them: they’ve never truly experienced what discrimination feels like, and so don’t know what to look for. Perhaps this explains their fear of and strident resistence against positive discrimination: they don’t know what it’s like but they know from history and culture that it’s a terrible thing to experience.

I was bullied for the largest part of my time at secondary school. I wasn't beaten up or anything, but I was constantly put down and laughed at. It wasn't because I was white or I was male. It was simply because, while I was very intelligent, I was not the quickest on the uptake. I often said silly things, or did silly things. Or maybe it was something I didn't say or do. Give me enough time and I can work out anything, but put me on the spot and I'm not necessarily that good. Because of this I was picked on and I felt miserable and my confidence suffered greatly.

Now I would love it if I could challenge those "white men have never felt negative discrimination" folks to go up to my 13 year old self and say that the bullying doesn't count and it isn't really discrimination at all because I was white and male. And I'd love to see them cower away and realise the drivel they've been spewing about this. Of course I have an advantage as a white man that others don't have, and I don't try to pretend otherwise. But in the bluntest terms, those who say I'm immune from discrimination because of that can go fuck themselves.

Now that that rather emotional rant is out of the way, let's look at other ways that this fallacy that white men can't ever be discriminated against, and so can't ever understand discrimination, is wrong.

First off, we have the case of parents. Look at the case when two people with children divorce or split up. The legal system automatically assumes that one parent should have the main custody and that more often than not that should be the woman. Just as much as the idea that women should be the child carers is negative against women, it is also negative against men who want to be involved. Now there can be cases where the two people who are splitting up come to a reasonable and fair agreement, but equally there can be cases where there is an unfair agreement, more often than not favouring the mother because of the assumption that women should have a bigger role.

Secondly we have cases like car insurance. Now I've argued about this before, and had people say how it's OK that men get charged more than women for insurance because the statistics state that men are more likely to have accidents. The statistics also say that women are more of a risk to employ, as they could have a child, which would cause them to miss work and go on maternity leave. But pretty much anyone with any sense would say that that's no reason to discriminate against women or pay them less or anything else like that. If you paid more for car insurance because you're a woman, or because you're black, or because you're gay, then there would have been a massive deal made of it long ago. But because it is against men, it hasn't been seen as a big deal.

Ultimately, white men as a group experience less discrimination than other groups. But to suggest they experience no discrimination and cannot understand what discrimination is, is incredibly ignorant, as much so as trying to write off the discrimination of any group.


What he doesn’t realize is that my “world view” here is an as-neutral-as-possible perspective through the lens of a significant body of research, statistics, and a great deal of knowledge on how women (in particular) are treated differently throughout their entire lives.

Faruk is again referring to me here. Now I'm not saying that his view of the current situation is wrong. In fact the research proves it is that way and I happen to agree with how he sees the problem. We just disagree on how best to solve the problems. There are lots of differences between how different groups are treated. Now I'm just going to focus on men and women here rather than other groups.

While there are differences in how men and women are treated, there are also differences in how they act. Countless studies show different ways men and women see themselves and see others, different ways in how they think and perceive the world. So how do we ultimately address these? Well as my point at the top of the post says, discrimination is wrong. We shouldn't just put women in front because they're women. But we should accept that given how things are now, women have an inherent disadvantage.

Now Faruk and I could ultimately be asking for the same thing, but calling it something different. I believe that society needs to accept and understand these differences, and approach people in a way best suited to the individual. I watched a really interesting video yesterday from the Scottish Ruby Conference, where a psychologist gave a talk on why there aren't enough women in the dev industry and what both men and women can do to change that. She talked about how small things can make a big difference and how men and women act differently.

Two good examples are that of environment and of applying for jobs. The environment can play a huge role. Geek culture can have a rather negative effect. One study the presenter mentioned was that of a computer science classroom. Women were less interested in computer science as a field if they were shown round a classroom full of Star Trek posters and video games. But put up posters of nature and replace the video games with phone books, and women were just as interested as men. Ultimately the environment can play a role, as people have a stereotype of programmers, and women can find that stereotype more off-putting than men when looking into the industry. If the environment plays up to that stereotype, it won't do much to help.

When applying for jobs there are two components the presenter mentioned: CVs and job descriptions. With CVs, men make more of a deal about themselves, whereas women don't necessarily do that. A man is more likely to talk about a project "I" did, whereas a woman may be more likely to talk about being part of a team that worked on a project. And with job descriptions, men and women see their qualifications differently. Both can look at a set of required skills, and even if they know the same amount of stuff, the man is more likely to think he's qualified. So the job description, while seemingly neutral, can cause women to write themselves off, and those that do apply may have CVs that make them seem less capable than men, simply due to how they word them.

Knowing these differences is incredibly useful. They show that you can potentially have an impact by doing some relatively simple things. Maybe re-word your job description. Put less emphasis on CVs and more on interviews, or at least look at CVs from men and women in the context of being written by a man or a woman. And maybe aim for a slightly more neutral environment, be it in the classroom, in the workplace, at a conference or even online.

Equality is an incredibly complex and difficult topic. There are lots of nuances to contend with. Because of this people can be arguing for almost the same thing, but it can seem like they're on opposite sides.

Ultimately discrimination based on gender, race, sexuality, etc is bad, and reversing it will do no good. Imagine a seesaw that naturally balances itself; at the moment it has lots of weight on one side. Now we can try to balance it by putting more and more weight on the other side, but we'll likely overshoot and then have to put more and more weight on the first side. This continues ad infinitum. Alternatively, you can just remove the weight and let things balance themselves out.

Positive discrimination assumes that in order to gain equality you need to discriminate against the advantaged group. Instead I believe that in order to gain equality you need to remove the discrimination against the disadvantaged group. Only a fool considers society equal today, but similarly only a fool considers inequality a means to achieve equality. As we understand more and more of the inherent differences between men and women, we should be adapting. The fact is that men and women ARE different and should be treated as such. Forcing women to behave more like men isn't going to get us anywhere, nor is the reverse. Instead, accept the differences. The current system works really well for men as it is geared to them, so we shouldn't necessarily change that side of it. What we should do is adapt it so that the system is also optimised for women.

Consider the situation as though it were two operating systems. Say men are Windows and women are OS X. Now imagine all the software was designed for Windows and OS X users had to run it. It wasn't optimised for them and Windows users had an advantage. Someone proposes that to make this more equal we need to design software for OS X instead, but then Windows users have a disadvantage. Instead the obvious solution is to design the Windows version for Windows and the OS X version for OS X and play to the advantages of each. People are different, and the only way we're going to achieve equality is to accept that not everyone is identical.

If you would like to send me a comment on this post, send an email to pilky@mcubedsw.com. Alternatively send a tweet to @pilky.

Dynamic Tips & Tricks with Objective-C tag:pilky.me,2011:view/21 2011-05-19T00:00:00Z 2011-05-19T00:00:00Z Martin Pilkington pilky@mcubedsw.com Over the past couple of years there has been a large influx of Objective-C developers. Some are coming from dynamic languages like Ruby or Python, some from strongly typed languages like Java or C#, and of course there are those who are new to programming altogether. But this means that a large number of Objective-C developers haven't been using it for all that long. When you're new to a language, any language, you focus more on fundamentals like the syntax and core features. But it is often the more advanced, and sometimes less well used parts of a language that really makes it shine.

In this post I'm going to give a bit of a whirlwind tour of the Objective-C runtime, explaining what makes Objective-C so dynamic, and then go into various techniques that this dynamism enables. Hopefully this will give you a better understanding of how and why Objective-C and Cocoa work the way they do.

The Runtime

Objective-C is a very simple language. It is 95% C. Language wise, it simply adds some extra keywords and syntax. What makes Objective-C truly powerful is its runtime. It is small, yet incredibly flexible. At the core of it is the principle of message sending.

Message Sending

If you're from a dynamic language like Ruby or Python, you likely know what messages are and can skip over the next paragraph. For those from other languages, read on.

When you think of invoking some code, you think of calling a method. In some languages, this allows for the compiler to perform extra optimisations and error checking as it is a direct and clear relationship between what is being called and what is being invoked. With message sending, this distinction is less clear. You don't need to know if an object will respond to a message in order to send it. You send off the message and it might get handled by the object. Or it could be passed along to another object. A message doesn't need to map to a single method, an object can potentially handle several messages that it funnels through to a single method implementation.

In Objective-C, this messaging is handled by the objc_msgSend() runtime function and its cousins. This function takes a target, a selector and a list of arguments. In fact at a conceptual level, the compiler simply converts all your message sends to calls to objc_msgSend(). For example, the following are functionally equivalent:

[array insertObject:foo atIndex:5];
objc_msgSend(array, @selector(insertObject:atIndex:), foo, 5);

Objects, Classes & Metaclasses

In most OOP languages you have the concepts of classes and of objects. Classes are blueprints from which objects are formed. However, in Objective-C, classes are themselves objects, which can respond to messages, which is why you have the distinction between class and instance methods. In concrete terms, an object in Objective-C is a struct whose first member is called isa and is a pointer to its class. This is the definition from objc/objc.h:

typedef struct objc_object {
    Class isa;
} *id;

The object's class is what holds the list of methods it implements, as well as a pointer to the superclass. Now that makes sense for objects, but classes are also objects. This means a class also has an isa variable, so what does it point to? Well this is where the 3rd type comes in: metaclasses. A metaclass is to a class, what a class is to an object, i.e. it holds the list of methods it implements, as well as the super metaclass. To get a more complete understanding of how objects, classes and metaclasses fit together, read this post by Greg Parker which explains them incredibly well.
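
As a quick, hedged sketch of that relationship, you can walk up the isa chain yourself with the runtime's object_getClass() function (the string is just an arbitrary example object):

#import <objc/runtime.h>

NSString *string = @"hello";
Class cls = object_getClass(string);   // the object's class (NSString or a private concrete subclass)
Class meta = object_getClass(cls);     // the class is itself an object, so this returns its metaclass
NSLog(@"%@, metaclass? %d", meta, class_isMetaClass(meta)); // logs the metaclass and 1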

Methods, Selectors and IMPs

So we know that the runtime sends messages to objects. We also know that an object's class holds a list of its methods. So how do those messages map to methods and how are methods actually implemented?

The answer to the first question is pretty simple. The method list in a class is essentially a dictionary, where selectors are the keys and IMPs are the values. An IMP is simply a pointer to the method's implementation in memory. The really important thing though, is that this connection between the selector and the IMP is determined at runtime, not compile time. This allows us to play around with it, as we'll see later.

The IMP is usually a pointer to a function, where the first argument is of type id and called self, the second argument is of type SEL and is called _cmd and subsequent arguments are the method arguments. This is where the self and _cmd variables are declared when you use them inside a method. Below is an example of a method and a potential IMP for it:

- (id)doSomethingWithInt:(int)aInt {}

id doSomethingWithInt(id self, SEL _cmd, int aInt) {}

Other Runtime Functionality

Now we know about objects, classes, selectors, IMPs and message sending, what is the runtime actually capable of? Well it really serves two purposes:

  1. Create, modify and introspect classes and objects

  2. Message sending

We've already covered message sending, but this is only a small part of the functionality. All the runtime functions are prefixed by the item they work on. Below is a run through each prefix and some of the more interesting functions it contains:

The class_ functions are for modifying and introspecting classes. Functions like class_addIvar, class_addMethod, class_addProperty and class_addProtocol allow for building up classes. class_copyIvarList, class_copyMethodList, class_copyProtocolList and class_copyPropertyList give all the items of each type in a class, whereas class_getClassMethod, class_getClassVariable, class_getInstanceMethod, class_getInstanceVariable, class_getMethodImplementation and class_getProperty return individual items that match the supplied name.

There are also some more general introspection functions which are often wrapped by Cocoa methods, such as class_conformsToProtocol, class_respondsToSelector and class_getSuperclass. Finally you can use class_createInstance to create an object from a class.
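
As a small sketch of this sort of introspection (my example, not from the post), you can list every instance method a class declares itself, not including inherited ones:

#import <objc/runtime.h>

unsigned int count = 0;
Method *methods = class_copyMethodList([NSString class], &count);
for (unsigned int i = 0; i < count; i++) {
    NSLog(@"%@", NSStringFromSelector(method_getName(methods[i])));
}
free(methods); // the copied list is malloc'd, so the caller must free it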

The ivar_ functions let you get the name, memory offset and Objective-C type encoding of an ivar.

The method_ functions largely allow for introspection, such as method_getName, method_getImplementation, method_getReturnType etc. There are some modification functions though, including method_setImplementation and method_exchangeImplementations, which we will cover later.

The objc_ functions are general runtime functions, in effect the root of the hierarchy. You have the various objc_msgSend functions for handling the core message sending functionality. There are also objc_getAssociatedObject, objc_setAssociatedObject and objc_removeAssociatedObjects, which obviously handle associated references. As the root of the runtime functions, you can get the classes and protocols in the runtime using objc_copyProtocolList, objc_getClassList, objc_getProtocol, objc_getClass etc.

Finally we have the functions for creating classes. objc_allocateClassPair creates the class and metaclass pair, allowing you to add methods, ivars and protocols. objc_registerClassPair puts these into the runtime, locking them down somewhat so that you can only add new methods.
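
Here's a hedged sketch of that class creation dance; the class name, the greet selector and its IMP are all made up for illustration:

#import <objc/runtime.h>

void MYGreetIMP(id self, SEL _cmd) {
    NSLog(@"Hello from %@", [self class]);
}

// ...then, somewhere at runtime...
Class newClass = objc_allocateClassPair([NSObject class], "MYRuntimeClass", 0);
class_addMethod(newClass, NSSelectorFromString(@"greet"), (IMP)MYGreetIMP, "v@:");
objc_registerClassPair(newClass);

id instance = [[newClass alloc] init];
[instance performSelector:NSSelectorFromString(@"greet")];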

The object_ functions let you perform some introspection and modification on individual objects. You can get and set the ivar values of an object. Using object_copy and object_dispose you can perform copies and free the object's memory. Most interesting though is the ability not only to get an object's class, but to use object_setClass to change its class at runtime. We'll see how this is useful later on.

The property_ functions expose the quite large amount of data stored with properties. On top of getting the name, you can use property_getAttributes to find out things such as a property's return type, whether it is atomic or non-atomic, the memory management style used, custom getter and setter names, whether the property is dynamically implemented, the name of the ivar backing the property and whether it is a weak reference.

Protocols are a bit like cut-down classes, and the protocol_ functions mirror the class ones. You can get the method, property and protocol lists of a protocol and check whether it conforms to other protocols.

Finally, the sel_ functions deal with selectors, letting you get a selector's name, register a selector name and check selector equality.

Now that we have a grasp of what the Objective-C runtime can do, and how it does some of it, let's look at some of the really interesting dynamic programming techniques it enables.

Classes And Selectors From Strings

One of the most basic dynamic things we can do is to generate classes and selectors from strings. We do this by using the NSClassFromString and NSSelectorFromString functions in Cocoa. It's pretty easy to do:

Class stringclass = NSClassFromString(@"NSString");

That gives us a class that we can send messages to. So next we could do:

NSString *myString = [stringclass stringWithString:@"Hello World"];

So why do this? Surely it's easier to use the class directly? Well usually it is, but there are some very useful cases where we can use these functions. The first is for testing whether a class exists. NSClassFromString will return nil if there isn't a class in the runtime that matches the string. For example, you could check whether you are on iOS 4.0 by checking whether NSClassFromString(@"NSRegularExpression") is nil or not.
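
That check is as simple as it sounds; a quick sketch (mine, not the post's):

if (NSClassFromString(@"NSRegularExpression") != nil) {
    // The class exists in this runtime, so we're on iOS 4.0 or later and can use it
}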

The other way they can be used is choosing the class or method to use based on some input. For example, say you are parsing some data. Each data item has a value string to parse and a type it represents (String, Number, Array). You could handle all those in one method, but you could also handle them in multiple methods. One way to do this is to read the type and use an if statement to call the correct method. The other is to use the type to generate a selector and call it that way. Here are the two different approaches:

- (void)parseObject:(id)object {
    for (id data in object) {
        if ([[data type] isEqualToString:@"String"]) {
            [self parseString:[data value]];
        } else if ([[data type] isEqualToString:@"Number"]) {
            [self parseNumber:[data value]];
        } else if ([[data type] isEqualToString:@"Array"]) {
            [self parseArray:[data value]];
        }
    }
}

- (void)parseObjectDynamic:(id)object {
    for (id data in object) {
        [self performSelector:NSSelectorFromString([NSString stringWithFormat:@"parse%@:", [data type]]) withObject:[data value]];
    }
}

- (void)parseString:(NSString *)aString {}
- (void)parseNumber:(NSString *)aNumber {}
- (void)parseArray:(NSString *)aArray {}

As you can see, you can replace a seven-line if statement with a single line. And the benefit is that if you need to support a new type in the future, you just add the new method, rather than have to remember to add an extra else if to your main parse method.

Method Swizzling

Earlier on we talked about how methods are made up of two components. The selector, which is an identifier for a method, and the IMP, which is the actual implementation that is run. One of the key things about this separation is that a selector and IMP link can be changed. One IMP can have multiple selectors pointing to it for example.

Another thing you can do is Method Swizzling. This is where you take two methods and swap their IMPs. Again, you may be asking "why do I want to do something like that?". Well lets look at the two obvious ways of extending a class in Objective-C. The first is subclassing. This allows you to override a method and call the original implementation, but it means that you have to use instances of this subclass, which can cause problems if you're subclassing a Cocoa class and Cocoa returns it (eg NSArray). In those cases you want to add a method to NSArray itself, which is where categories come in. For 99% of cases this is great, but you cannot call the original implementation if you override a method.

Method Swizzling lets you have your cake and eat it. You can override a method without subclassing AND call the original implementation. You do this by adding a new method, usually via a category (but it can be in a completely different class). You then exchange the implementations, using the method_exchangeImplementations() runtime function. So let's look at a concrete example of this, and override the addObject: method on NSMutableArray to log any objects that are added.


#import <objc/runtime.h>

@interface NSMutableArray (LoggingAddObject)
- (void)logAddObject:(id)aObject;
@end

@implementation NSMutableArray (LoggingAddObject)

+ (void)load {
    Method addObject = class_getInstanceMethod(self, @selector(addObject:));
    Method logAddObject = class_getInstanceMethod(self, @selector(logAddObject:));
    method_exchangeImplementations(addObject, logAddObject);
}

- (void)logAddObject:(id)aObject {
    // After swizzling, this actually invokes the original addObject: implementation
    [self logAddObject:aObject];
    NSLog(@"Added object %@ to array %@", aObject, self);
}

@end

So the first thing to note is that we're exchanging implementations in the load method. This method is called on every class and category only once, as it is loaded into the runtime. If you're wanting to exchange implementations for the entire lifetime of a class, this is the best place to put the code. If you only want to do this temporarily, you can put it wherever works best.

The second thing to note is the apparent infinite recursion in logAddObject:. This is one of the disadvantages of Method Swizzling, in that it can mess with your brain a bit if you forget methods are swizzled. The important thing to remember is this: everything between the { } is the IMP, everything before is effectively the Selector. Normally the Selector and IMP match up as they do in the code, but if you swizzle, then the Selector actually points to another IMP. This diagram will hopefully clear it up.

Method Swizzling Diagram

Dynamic Subclassing/isa Swizzling

As we covered when looking at the runtime functions, you are able to create new Classes from scratch at runtime. This feature isn't used too often, but it is very powerful when it is used. It can allow you to create a new subclass, with some additional functionality.

But what use would such a subclass be? Well it's important to remember the key thing about an object in Objective-C: it has a variable called isa which is a pointer to its class. This variable can be changed, effectively changing the class of an object without needing to recreate it. Now it isn't quite so simple as you can't really mess around with ivar layout, but you can add new ivars and new methods to an object. To change the class of an object, you just do the following:

object_setClass(myObject, [MySubclass class]);

An example of where this is used is Key Value Observing. When you start observing an object, Cocoa creates a subclass of the object's class, and then sets the object's isa pointer to the new subclass. For a more complete explanation, check out this Friday Q&A by Mike Ash.
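
You can actually watch this happen. Here's a hedged sketch, where MYPerson and observer are made-up stand-ins for a KVO-compliant class and an existing observer object:

MYPerson *person = [[MYPerson alloc] init];
NSLog(@"%@", object_getClass(person)); // MYPerson

[person addObserver:observer forKeyPath:@"name" options:0 context:NULL];
NSLog(@"%@", object_getClass(person)); // a dynamically created subclass, e.g. NSKVONotifying_MYPerson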

Dynamic Method Resolution

We've so far looked at swapping things around and dealing with things that are there. What happens when you send a message to an object that doesn't respond to it? The obvious answer would be "it breaks". While true in most cases, there are actually a series of steps that Cocoa and the runtime go through, allowing you to perform some tricks before it finally gives up.

The first of these steps is Dynamic Method Resolution. Usually, when resolving a method, the runtime looks for a method matching a selector and invokes it. Sometimes though, you don't want a method to be created until runtime, maybe because there is some information you need at runtime before it can be made. Either way, the first time this method is needed, you generally want to be told so you can provide an implementation.

To do this you need to override the +resolveInstanceMethod: and/or the +resolveClassMethod: methods. These methods get called with the selector for the required method passed in, allowing you to add the method to the class. If you do add a method you should be sure to return YES, so that the runtime doesn't move on to the next step. A simple implementation would be:

+ (BOOL)resolveInstanceMethod:(SEL)aSelector {
    if (aSelector == @selector(myDynamicMethod)) {
        // myDynamicIMP is assumed to be a C function defined elsewhere, taking (id self, SEL _cmd)
        class_addMethod(self, aSelector, (IMP)myDynamicIMP, "v@:");
        return YES;
    }
    return [super resolveInstanceMethod:aSelector];
}

So where is this currently used in Cocoa? Well Core Data uses this quite a bit. NSManagedObjects have accessors added to them at runtime for getting and setting attributes and relationships. Now these could be generated in code prior to compilation, but then you're hit with the problem of: what if the model is different at runtime? After all, the model can be changed.

Message Forwarding

If the resolve method returns NO, the runtime moves onto the next step: Message Forwarding. Whereas Dynamic Method Resolution is about creating methods at runtime, Message Forwarding is about re-routing messages. There are two main uses for this. The first is to pass a message on to another object that can handle it. The second is to route several messages to a single method.

Message Forwarding is a two step process (at least on the latest OS versions). First, the runtime calls -forwardingTargetForSelector: on the object. If you only want to pass a message to another object as-is then you should use this method, as it is much more efficient. If you want to modify the message prior to forwarding, however, you want to use -forwardInvocation:. In this case the runtime packages up the message in an NSInvocation and sends it to you to handle. After you have handled the NSInvocation object, you simply call -invokeWithTarget: on it, passing in the new target. The runtime will sort out passing the return value back to the calling object.
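
Here's a minimal sketch of both steps, assuming a wrapper object with a realObject ivar (the name is made up for illustration). Note that the -forwardInvocation: path also needs -methodSignatureForSelector: overriding, so the runtime can build the NSInvocation in the first place:

// Fast path: hand the whole message to another object unchanged
- (id)forwardingTargetForSelector:(SEL)aSelector {
    if ([realObject respondsToSelector:aSelector]) {
        return realObject;
    }
    return [super forwardingTargetForSelector:aSelector];
}

// The runtime asks for a signature before it can package the message into an NSInvocation
- (NSMethodSignature *)methodSignatureForSelector:(SEL)aSelector {
    NSMethodSignature *signature = [super methodSignatureForSelector:aSelector];
    if (signature == nil) {
        signature = [realObject methodSignatureForSelector:aSelector];
    }
    return signature;
}

// Slow path: inspect or modify the invocation, then send it on to the real target
- (void)forwardInvocation:(NSInvocation *)anInvocation {
    if ([realObject respondsToSelector:[anInvocation selector]]) {
        [anInvocation invokeWithTarget:realObject];
    } else {
        [super forwardInvocation:anInvocation];
    }
}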

There are several places Message Forwarding is used in Cocoa, but the two key places are Proxies and the Responder Chain. An NSProxy is a lightweight class whose intended purpose is to forward messages on to a real object. This is useful if you want to lazily load part of an object graph, or if you're using something like Distributed Objects where the object you actually want to call is potentially on another computer. It is also used by NSUndoManager, but in this case to intercept messages to invoke later, rather than to forward them to something else.

The Responder Chain is how Cocoa deals with sending events and actions to the correct object. Take for example, performing a copy by hitting Cmd-C. This sends a -copy: message down the Responder Chain. It initially goes to the First Responder, usually the active UI element. If that doesn't handle it the message is forwarded to the -nextResponder. It continues down the chain until it finds an object that can handle the message, or until it reaches the end of the chain, in which case it causes an error.

Using Blocks As Method IMPs

iOS 4.3 brings a lot of cool new runtime functions. On top of increased powers with properties and protocols comes a new set of functions beginning with the prefix imp. Normally an IMP is a pointer to a function where the first two arguments are an object (self) and a selector (_cmd). iOS 4.0 and Mac OS X 10.6 gave us those rather nifty new things called blocks though. These imp prefixed functions allow us to use a block as an IMP for a method, specifically using the imp_implementationWithBlock() function. Here's a quick snippet showing how you can use this to add a method using a block.

IMP myIMP = imp_implementationWithBlock(^(id _self, NSString *string) {
    NSLog(@"Hello %@", string);
});
class_addMethod([MYclass class], @selector(sayHello:), myIMP, "v@:@");

For more information on how it works, check out this great post by Bill Bumgarner.

As you can see, there is an awful lot of power in Objective-C. While it seems simple on the surface, it provides a lot of flexibility, which enables a lot of cool possibilities. The key advantage a highly dynamic language has is the ability to do a lot of stuff without having to actually extend the core language. A good example is Key Value Observing, which offers an elegant API that works with existing code, without the need for new language features or to heavily modify existing code.

Hopefully this post has given you a deeper understanding of Objective-C and opened your eyes to some of the possibilities it allows when designing your applications.

If you would like to send me a comment on this post, send an email to pilky@mcubedsw.com. Alternatively send a tweet to @pilky.

Patent Trolls & Devil's Advocates tag:pilky.me,2011:view/20 2011-05-16T00:00:00Z 2011-05-16T00:00:00Z Martin Pilkington pilky@mcubedsw.com So if you're in the tech world you may have heard that several small iOS developers have been sent legal papers by what is commonly known as a patent troll, asking for money for a patent that apparently covers In App Purchasing. There are lots of questions, people are angry and as usual patents are being berated.

Patents: The Good

So what are patents? Well their intent is to encourage investment in research and development, by providing a means to prevent others from copying your work without having to pay anything. On the face of it they're potentially great things. A company can spend millions, if not billions, on researching something that vastly improves their products, and their competitors cannot simply copy it and get the benefits without the expenditure.

Now a lot of people seem to think that patents are for stifling innovation, they're a "only we can use this" sort of thing, but often patents are licensed to others, so that others can use the invention, while the inventors get rewards. A good example of this is the MPEG patent pool.

Ultimately patents are meant to be like copyright for inventions. This blog post is copyrighted to me, and I get that for free. But copyrighted things are usually easier to create. Almost anything I can create can be copyrighted, and I can create a lot of stuff with minimal money. It costs far less to write a blog post, a book, a song, a piece of software, take a photo, paint a picture etc than it does to research and develop a new invention. Patents are intended to provide a stronger protection for these more costly things.

Patents: The Bad

Unfortunately, while patents are great and highly beneficial in theory, in practice they have become a hindrance, especially in the US. Patents are far too easy to acquire and they can be awarded for incredibly trivial things. This is a really big issue in software, where people come up with obvious stuff and get a patent on it.

The problem here is that people are able to patent ideas rather than inventions, or rather are able to class ideas as inventions. This is a very dangerous system, as someone can just sit there, think something up in 5 minutes, patent it and then sit back and rake the money in, without actually doing anything. Meanwhile those who actually do something worthwhile are left at a disadvantage.

As an example, let's take the lightbulb. Say we're developing the first lightbulbs. It is taking a lot of time, effort and money to build these lightbulbs: testing what makes the best filament, what is the best shape for the bulb, what is the best fixture to use, what is the best gas to put in. Eventually we get to the best combination and we go and patent it. Such things are what patents were made for: you've worked out the best way to do something by putting in lots of time and money, and you would like to protect that investment.

Unfortunately, someone else came along a bit early and patented the idea of a light source that is powered by electricity. Now such a thing seems far too broad. Above, we have developed the incandescent light bulb, something specific that takes work. This idea took no work and covers all types of lightbulbs that could be made. This is damaging to innovation.

Ideally if you're patenting something, you should be required to show a prototype, to show that you have been putting time and effort into refining something, rather than just some idea you thought up.

Patent Trolls

Patent trolls are people or companies who do nothing but buy up patents with the sole intent of going after others. Now these should be made distinct from patent pools, which are often organisations that manage patents from several companies who make stuff, to make it easier to licence patents. Patent trolls often fully own the patents and do nothing with them, and usually didn't even file them in the first place.

In an ideal patent world these types of organisations shouldn't exist. If it cost more to get a patent, in terms of time and money spent developing the invention, then they wouldn't be able to operate. Also, if patent law placed restrictions on selling patents, maybe stating that after purchasing a patent you have to actively use it or you lose it, it would stop patent trolls as they would then have to make products.


So finally onto Lodsys. Yes they are a classic patent troll. They make nothing, all they really own are patents and they're going after people asking for money for patents that really shouldn't be patents at all. I mean honestly, patenting the idea of making payments within an app? If it was some specific method of making payments, that offered great benefits and took a lot of time and effort to develop I could understand, but we're talking about the idea of making a payment within an app.

Now this is where I'm going to play Devil's Advocate. Under the law, this patent exists, Lodsys owns it and they are therefore entitled to seek fees for its use. As much as we may disagree with it in principle, it's how things are. Now they could do this a variety of ways, and often patent trolls file infringement lawsuits asking for millions, if not billions in compensation. And they cannot be counter sued because they make nothing so can't infringe on anything.

Lodsys, however, have been somewhat more reasonable. They aren't sending out cease and desist letters. They aren't taking people to court (yet). Apparently what they have sent out is a notice saying that they have this patent and they would like to start collecting licensing fees for it.

They could also have asked for massive lump sums in order to licence the patent, which could put many small developers out of business. Instead they say they are asking for 0.575% of US revenue, which for most devs would amount to a few tens of dollars a year, if that. Nobody wants to pay that, obviously, but similarly nobody is going to go bankrupt from it.

I'm not saying that I in any way like what is going on. The reality of the situation sucks, but it is also sadly unavoidable. These are things that shouldn't really be patentable in the first place, but they are and we have to deal with that. I may dislike what Lodsys represents, and that laws in various countries allow them to run such a business, but from all appearances so far, I cannot claim they are evil or unreasonable.

If you would like to send me a comment on this post, send an email to pilky@mcubedsw.com. Alternatively send a tweet to @pilky.

The Fallacy Of Equality At Conferences tag:pilky.me,2011:view/19 2011-04-28T00:00:00Z 2011-04-28T00:00:00Z Martin Pilkington pilky@mcubedsw.com I often like reading "translation" posts, where someone takes what someone said and puts it into the terms everyone was thinking. I also like reading posts regarding the problem of equality and representation of people at conferences, the lack of people who aren't straight, white men in the community, and how we can possibly solve it. I also have a lot of respect for Faruk Ates (@KuraFire on Twitter). So it was a disappointment to read a "translation" style post by Faruk this morning on the topic of how to increase the participation of minorities at tech and design conferences, which I almost entirely disagreed with and considered crass and unhelpful.

For those who can't be bothered clicking the link (though I strongly recommend you do), here's a brief overview. Apparently a big discussion took place last night on Twitter between Mike Monteiro and some others. Apparently Mike stated that conferences that have only white, male speakers are unacceptable and MUST have female and black speakers. There was some back and forth on Twitter and then some people wrote some blog posts. Faruk took it upon himself to "translate" these posts, but sadly he seemed to completely miss the mark on many of the quotes.

Now there were some silly things said by the people he was quoting. For example:

Are you retarded? How many black swimmers do you know? How many white 100m sprint runners? How many female fighter pilots?

While at first glance it's a reasonable point, it's worth remembering two things. The lack of female fighter pilots is likely influenced by discrimination against women, both hindering those who do serve and putting off those who may want to. So not really equivalent to conferences. And the swimmers vs sprinters argument: that is down to biology. Last time I checked, the explanation was bone density; apparently black people have lower bone density than white people, and lower bone density is an advantage for running but a disadvantage for swimming, which is why you see white people dominate swimming and black people dominate running. (UPDATED: While I've not seen anything to disprove this, in the time since writing this post I have seen evidence that it is more down to culture than biology, though not necessarily discrimination. Regardless, the point still stands that the above quote was misguided.) But again, I've seen nothing suggesting that someone's biology would affect their ability to speak at a conference, just because of their gender, race or sexuality.

Now I'd briefly like to focus on what Faruk said. As I've previously stated, I respect him. But in this post he is not arguing about equality, no matter how much he may like to try and make it seem that way. He's simply arguing for a different form of discrimination that is more preferable to his world view. Whenever one of the people he quotes talks about equality and choosing people based on talent, he is quick to throw out the racism or sexism cards. This is a shame.

Quality Or False Equality

To deny there's a problem is crazy. There are too few non-straight, white, male speakers at conferences. This is something that needs addressing so there are more role models for people and so we can increase participation. But there is another problem. Too many people are looking in the wrong places or assuming evil where there is none.

Let us get some facts straight. Women are a minority in our community. Black people are a minority in our community. Gay people are a minority in our community. Now within every group there will be a subset of people who are both good at and want to speak at conferences. And within that subset there is another subset of people who are available to talk at Conference X due to time constraints, location etc. This means there is a smallish pool of speakers, and given how many conferences there are, they are in high demand.

So that's a good thing: we want high quality speakers. But as there is nothing to really say that one group is better than another at speaking, the pool of good, willing speakers likely has the same makeup as the community at large. Now everyone is trying to get the minority groups to speak at their conference, wanting to help solve this issue. The problem is those speakers can't be at every conference, even if they went full time. This means that a conference organiser may be faced with two choices:

  1. Go to the larger pool of good, willing speakers who happen to be straight, white and male

  2. Go to the larger pool of bad or unwilling speakers who happen to be of a minority group

So your choice is, pick someone because they have a different colour of skin or because they have a vagina, or pick someone because they are the best person for the job. So what do you want, quality or a false sense of equality? Personally I would prefer quality.

This all reminds me of a quote from The West Wing. C.J. is asked why she doesn't agree with positive discrimination and gives the following quote:

After my father fought in Korea, he became what this government begs every college graduate to become. He became a teacher. And he raised a family on a teacher's salary, and he paid his taxes and always crossed at the green. And any time there was opportunity for career advancement, it took him an extra five years because invariably there was a less qualified black woman in the picture. So instead of retiring as superintendent of the Ohio Valley Union Free School District, he retired head of the math department at William Henry Harrison Junior High.

Discriminating against someone based on their race, gender, sexuality etc is wrong, be it positive or negative for them. One could argue that the reason straight, white males have so much power isn't because of negative discrimination against others, but positive discrimination towards straight, white males. Discrimination based on what you are always causes resentment in others and can cause unease in yourself. If a female dev gets a speaker role simply because she's a woman, then more capable male speakers will likely resent that person. She's there because her reproductive organs are on the inside. Likewise the female speaker may feel unease about her own capabilities. Is she really capable enough or is she there just because she's a woman?

Don't Treat The Symptoms, Treat The Cause

The lack of minority speakers at conferences isn't the cause of the fact that these groups are minorities. There is a lot of emphasis put on it though as being something we must solve. Personally I don't think we should put quite as much effort into it. As long as conference organisers are contacting these minority groups as much as the majority group, and are asking them because they believe them to be high quality speakers, then that's fine. If all the minority speakers they contact can't do it but they find enough white, male speakers who can, then great: you've got a conference with N high quality speakers.

Claiming that you cannot have conferences where all the speakers are white men is incredibly impractical. To get that enforced you'd need to either say that any woman, black person or gay person who is asked to speak at a conference MUST accept, or to say that any conference that doesn't have at least one of those cannot go ahead. Why ruin something for a large group of people simply because those on some self-righteous guilt trip can't be satisfied with the reality of the situation and don't want to tackle the real problems?

We need to treat the cause of the problem. All groups are equally good at speaking, so the only way to increase the numbers of the speakers in minority groups is to increase the number of community members in those minority groups. We should be trying to get more female, black, gay etc developers and designers. When we have more of them we have more chance of finding good, willing speakers in those groups and so will increase the amount of speaker slots filled by them.

The fact is that conferences don't really inspire people to get into this industry. They may influence a few people but you often don't learn about conferences until you're already part of the community. What gets people interested is the stuff outside of conferences. First and foremost, parents help. If my dad hadn't brought home a computer when I was 3 and let me play with it I would not be where I am today. They also bought me a lot of the greatest toy ever made for inspiring creativity: Lego. We already know that toys help children learn. There is also evidence that the bias in toys between girls and boys affects their outlooks on life. I mean, what are the bulk of toys for girls? Baby dolls, toy cookers, toy cleaners etc. Is it any surprise people grow up and inherently assume women will be the child carer, chef and cleaner?

We also need to get into schools. Go and speak in a school and open their minds to the possibilities of being a developer or a designer while they are young. If you can get them interested they'll start looking into stuff on their own. Show them how cool and interesting what we do is and show them how to get started and where to look.

Make sure to write blog posts and guides and offer help to those who ask for it. Maybe take someone under your wing and mentor them. Just as much as you want to give to the community, give to those outside of the community who may want in. It is thanks to the help and recommendations of several people that I got to where I am now. They got me interested in Mac dev and the possibility of going to university to do Computer Science. That got me to choose what A levels I would take and helped me get round to starting my company and selling software.

And finally, build cool stuff. Make beautiful, spectacular and amazing applications and websites and designs or whatever it is you make. And then show people how you made them. I put my interest in creating apps down to two pieces of software that came with a Performa 5200 my family got when I was 7: Myst and ConcertWare. These two apps were eye openers for me. And best of all, Myst came with a making-of, so I could see how they did it and I could try and do it myself. Down the line I started going to the library and looking at programming books and everything spiralled from there.

The fact is, it is a shame that there aren't more female, black, gay, etc speakers at conferences. It's also a shame there aren't more attendees of those groups at conferences. And that there aren't more bloggers of those groups in the community. And that there just aren't more devs and designers of those groups in the community. But these are all ultimately symptoms. We can try to treat them all we like but we'll still be stuck where we are now. We need to attack the cause. I can tell you one thing for certain: there will be more people of all backgrounds inspired to join our community due to things like Facebook, Twitter and the App Store than there ever has been due to a conference.

If you would like to send me a comment on this post, send an email to pilky@mcubedsw.com. Alternatively send a tweet to @pilky.

Handling Concurrency tag:pilky.me,2011:view/18 2011-03-20T00:00:00Z 2011-03-20T00:00:00Z Martin Pilkington pilky@mcubedsw.com We all hear about how we need to make our applications more concurrent. There is no longer a free lunch for software developers, where processor cores will get faster, giving us performance boosts for free. Instead we need to try and run more code in parallel.

Unfortunately we also hear about how hard concurrency is. It isn't something you can just throw in easily. Concurrency is hard to conceptualise and incredibly hard to debug given the time-sensitive nature of many bugs. So you have to deal with locks to try and prevent things like race conditions. Basically it's a scary ball of complexity that keeps most developers away.

Thankfully, it doesn't have to be quite so scary. If you look at concurrency you realise that it isn't a multitude of problems but one problem that causes all the hurt: data mutability. And by thinking about concurrency in a new way you can simplify and to a large degree eliminate these problems, removing a lot of the need for things like locks in your own code, which can cause performance issues.

Data Mutability

So what is the problem with data mutability and concurrency? Well let me give a simple example. Imagine the following block of code:

int i = 0;

void increment() {
    int locali = i;
    locali += 1;
    i = locali;
}

A simple enough thing: every time you call increment, the value of i is incremented. So if we were to run it twice and then print out i we would get 2. Now let's say we run it twice concurrently: we could very well get 2, but we could also occasionally get 1 as the output. Why is this? Well, let's see what happens with the two function calls "inc1" and "inc2" run in serial and in parallel.

When run serially they run like so:

  • inc1: store i in locali
  • inc1: increment locali
  • inc1: set i to locali
  • inc2: store i in locali
  • inc2: increment locali
  • inc2: set i to locali

However, when run in parallel this might happen:

  • inc1: store i in locali
  • inc1: increment locali
  • inc2: store i in locali
  • inc1: set i to locali
  • inc2: increment locali
  • inc2: set i to locali

The second call of increment starts before the first one has finished, after locali has been incremented, but before i has been set to the new value. This means that when the second call gets the value of i, it is still 0.

The Solution

So the issue here is reads and writes of a variable getting out of sync. When you start reading from or writing to a variable on multiple threads all hell breaks loose. There are 2 simple solutions to this problem though.

The first is to get rid of variables. This is an approach taken up in functional languages. Functional languages are very big on immutability of data. Take an array for example. You wouldn't append or insert an item into the existing array, but create a new array containing the appended or inserted item. This immutability means that you can't have two different threads modifying one object at the same time, as no thread can modify it. This eliminates so many issues with concurrency, but we can't just remove mutability from every language/API as it is still very useful in many situations.
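
To make that concrete in Cocoa terms, here is a minimal sketch (my own illustration, not from the original post) of the difference between mutating a shared array and producing a new one:

// Mutable: two threads calling addObject: on the same array at the same time can corrupt it.
NSMutableArray *sharedItems = [NSMutableArray array];
[sharedItems addObject:@"new item"];

// Immutable: nobody can change the original array, so handing it to another thread is safe.
NSArray *items = [NSArray arrayWithObject:@"first item"];
NSArray *newItems = [items arrayByAddingObject:@"new item"];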

The second is to conceptualise a background task as a black box. You pass in some data, it works on that data and it returns some new data. It knows nothing about the rest of your application and does not mutate any application data. All data reading and writing is done in chunks. Essentially you follow 3 steps:

1. Read all the data you need
2. Work on that data
3. Write the result of the work

1 and 3 would usually be on the main thread and 2 is what happens on the background thread. This is basically how things such as pixel shaders work on graphics cards. When you write a shader you are writing step 2. As it is self contained, many instances can be run concurrently very easily with no fear of conflicts.

Take for example inverting the colours of an image, where every pixel is modified independently. Let's assume that operation takes 0.01 seconds per pixel and we have a 10,000 pixel image. If we did it sequentially it would take 100 seconds, but if we had 100 cores we could do the same task in 1 second. And you don't need to write any special code to handle concurrency as each shader instance only cares about the single pixel it is supplied.
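
As a rough sketch (my own illustration, not an actual shader), step 2 for the colour inversion example could be as small as a function that only ever sees the pixel it is given:

#include <stdint.h>

typedef struct { uint8_t r, g, b, a; } Pixel;

// Step 2 as a black box: it reads only the pixel passed in and returns a new one.
// Because it touches no shared state, any number of these can run in parallel without locks.
static inline Pixel InvertPixel(Pixel p) {
    return (Pixel){ (uint8_t)(255 - p.r), (uint8_t)(255 - p.g), (uint8_t)(255 - p.b), p.a };
}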

Concurrent Cocoa

So how do you handle this in Cocoa? Well thankfully we have a class that is designed for this very task: NSOperation. NSOperation is essentially a class for creating a self contained work unit. Operations are put on a queue which then handles all the implementation details of how many threads to run and which thread to run each operation on. Much like the pixel shader example above, it abstracts away many of the implementation details. A simple example of this is below. This uses NSData but it can be any immutable data type (and before you say anything, I'm using Garbage Collection in this example, so NO it does not leak):

NSData *someData = [iVarData copy];
[myOperationQueue addOperationWithBlock:^{
    NSData *newData = nil; // result of some processor intensive action with someData
    [[NSOperationQueue mainQueue] addOperationWithBlock:^{
        iVarData = [newData mutableCopy];
    }];
}];

What we've done is copied our mutable ivar into a local variable. This is then used within the outer block to perform some intensive work, and the result is put into newData. Once that is done we put a new block on the main queue which will set the value of the ivar to our new data. Very simple, and it doesn't require any sort of locks in your own code.

Now this example can still cause issues, but it is easier to manage them. For example, while it eliminates the problem of potentially reading in invalid data, if you added two such operations they wouldn't necessarily run one after another. Thankfully NSOperation allows you to make it so they do, by making one operation depend on another finishing. You can also subclass NSOperation to allow for more complex operations, which I would recommend if you need to pass a lot of data in.
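
Dependencies are only a couple of lines. A quick sketch (the operation names are just placeholders of my own; the queue is the one from the example above):

NSOperation *firstPass = [NSBlockOperation blockOperationWithBlock:^{ /* work on copied data */ }];
NSOperation *secondPass = [NSBlockOperation blockOperationWithBlock:^{ /* more work */ }];

// secondPass won't start until firstPass has finished, even on a concurrent queue.
[secondPass addDependency:firstPass];
[myOperationQueue addOperations:[NSArray arrayWithObjects:firstPass, secondPass, nil] waitUntilFinished:NO];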

Concurrency isn't simple, but it needn't be very hard either. By conceptualising background code as black boxes, which know about nothing but the data you pass in, and by making sure that data is immutable, you can remove a massive amount of issues. By working this way you can eliminate almost all locks in your own code as you will never have two items reading or writing a value at the same time. It also simplifies your code as you can draw the lines between what runs in the background and what on the main thread at object boundaries.

So just remember these few rules:
1. Only pass immutable data across threads
2. Treat background tasks as black boxes
3. Only mutate data on a single thread (usually the main thread)

Follow those and most of your concurrency problems vanish. It doesn't work for every situation, but for most of the concurrent code an application developer will need to write, it works fine.

If you would like to send me a comment on this post, send an email to pilky@mcubedsw.com. Alternatively send a tweet to @pilky.

All About Schemes tag:pilky.me,2011:view/17 2011-03-11T00:00:00Z 2011-03-11T00:00:00Z Martin Pilkington pilky@mcubedsw.com Schemes are one of the most interesting new things in Xcode 4, but also one of the hardest to get your head around at first. This guide will help you understand what schemes are and why they are useful.

What did we have?

Xcode 3.2 had an overview pop up, which let you set the various settings involved in building and running your app. You could set the SDK to use, the build configuration, the target to build and which executable to run and also where and how you want to run the app. Unfortunately this led to many different combinations. In the image on the right alone there are 189 different possible combinations. Some projects could potentially get even more complicated.

There was even more complexity though. You could build, build and analyse, or build and archive. You could run, or run with a performance tool. You had various debugging aids such as Guard Malloc that you had to remember to enable and disable depending on what you were doing, as well as changing launch arguments and environment variables. Basically, in order to manage everything you had to go to about 4 or 5 different places. And you would have to remember to change every setting for each different item (I have forgotten to change the active configuration to my App Store config when doing a build and archive many times).

What is a scheme?

Obviously this setup isn't ideal. You only ever use a few combinations of settings. You almost always want to use a release build for archiving and profiling and a debug build for running, analysing and testing. Those are all distinct tasks, each with their own setup. You may also want different setups depending on what part of the app you're working on. For one particular part of the app you may want certain environment variables and tests to run. You may also want different arguments or environment variables set depending on whether you're running, testing or profiling.

Schemes essentially clump all these together, meaning you need to spend less time worrying about getting the right configurations. At its most basic a scheme is an environment in which to build, run, test, profile, analyse and archive your code.

Managing schemes

The most basic part of the scheme UI is the schemes menu. It is from here that you select the active scheme (on the left) and launch destination (on the right). For example, in the image on the left I have the Test Project scheme selected and it will launch on my Mac in 64 bit mode. On iOS the launch destinations should list the simulator and any development ready devices you have plugged in.

Below this you have a few options to edit the current scheme, create a new scheme and manage schemes. Editing schemes will be covered in the next section, but for now we're going to treat them as black boxes and see how we can manage them.

If you select the manage schemes option you will get a sheet pop up similar to the one below. By default Xcode automatically creates schemes for you based upon the targets in your project/workspace. Now you don't necessarily want or need all of those visible. In the example project the framework and helper tool will be built when we build the projects and the unit test bundles don't need to be built separately in Xcode 4, so we can hide all of those schemes (we could potentially just delete them as well).

Schemes can be contained either in a project or a workspace. The latter allows you to have schemes that only apply to certain groups of projects. It is also important to note that by default all schemes you create are personal to you; if someone else opens the project or workspace then they will not see your schemes (though Xcode will autocreate schemes for them). If you would like a scheme to be visible to everyone who opens a project or workspace then you can mark it as shared.

This setup is incredibly powerful, especially when working in groups. You can have shared schemes that cover things like the final release build, which need to be the same across all devs, but your own set of personal schemes for use during development, which others may not need to see. And if you never need to use the shared schemes yourself, you can just hide them.

Finally, you can also export schemes and send them to someone else, who can then choose to import them, much like external build configuration files.

Editing schemes

Editing a scheme will bring up a different sheet. Along the top are controls similar to in the toolbar for selecting the scheme and destination to use and whether breakpoints are enabled. On the right are the main sections and on the left the details.

The UI for the various sections is somewhat confusing, and the icons don't do much to help. Initially I thought they were steps that followed on from one another, like target build phases. That isn't actually the case though: they represent two distinct steps. The first step is Build; the second step is either Run, Test, Profile, Analyse or Archive, depending on which action you selected. These are independent steps in most cases, so you can do a plain build, build and then do a second step, or just perform one of the second steps without building (except Analyse and Archive, which require a build). The UI could really be improved by making this separation between Build and the other stages more distinct. (rdar://9121706)


The build panel has very few options. There are two basic options: Parallelise Build and Find Implicit Dependencies. The former will build independent targets at the same time and the latter will try to find any dependencies that are implicit in your project. What does that mean? Well, say you have a framework that you build in another project and you have a copy of the built framework in your app's project to use for building. If you have a workspace with just your app's project in it, everything works just as you'd expect. If you then add the project file for the framework to that workspace, Xcode will build the framework from source for you. It's a bit like a cross project reference, but somewhat cleaner as you don't need the project there to build. In fact, remove the framework project from the workspace and Xcode goes back to using the pre-built version.

Below is a table that contains the targets to build in this scheme, and when to build them. For example, you may only want your test bundles to build when testing and your frameworks to only rebuild when you do an archive or run. This is useful to help fine tune more complicated projects or workspaces.

You may also be wondering where the build configuration setup is. This isn't actually defined by the build stage, but by the second stage. For example, you usually want the debug configuration to be used for running, testing and analysing, but release for profiling and archiving. As such, you set the configuration in the other sections.


The run panel lets you set up the environment in which to run your application. As well as the configuration, you can also choose which executable and debugger to use, letting you try out LLDB. You can also delay the launch, set a custom working directory and even set the UI scale factor (which gives me hope that we'll soon see resolution independence make a comeback).

There are two other tabs as well. The first is arguments, where you can set the launch arguments, environment variables and which modules to load debug symbols for. The second is the diagnostics tab. This is where all the common debug tools can be found, such as guard malloc, exception logging and more.


The test panel is for setting up your test environment. This offers a huge improvement over Xcode 3.2. For starters, you can choose exactly which tests to run. Just uncheck a test to prevent it from running. This is great if you only want a subset of your test suite to run. The other big improvement is that you can debug tests and so can choose which debugger to use: GDB or LLDB.


The profile panel is quite similar to the run panel, except instead of choosing which debugger to use, you choose which instrument template to use.


You can't do much in Analyse bar changing the build configuration.


Finally, archiving provides basic settings for how to archive your app. You can choose the name to use and whether to reveal the final archive in the organiser.


You may notice that each scheme stage has a disclosure triangle next to it. This is because you can set up actions to run before and after a stage. There are only two sorts of actions you can add: run script and send email. These can be useful for integrating with various build systems to let them know about the completion of a stage.

Schemes are rather simple and elegant once you get your head round them. They allow you to focus more on writing your code than constantly setting up your environment. The less you have to set up, the fewer mistakes you will make. Hopefully this guide has been helpful to you in understanding schemes, but let me know if you have any unanswered questions and I'll try to update it to answer them.

If you would like to send me a comment on this post, send an email to pilky@mcubedsw.com. Alternatively send a tweet to @pilky.

Where The Fuck Is _ In Xcode 4? tag:pilky.me,2011:view/16 2011-03-10T00:00:00Z 2011-03-10T00:00:00Z Martin Pilkington pilky@mcubedsw.com It can be hard to find your way around Xcode 4 at first, especially coming from Xcode 3. This is a quick start guide for how to find various Xcode 3 items in Xcode 4.

The files & groups view

This is now the project navigator. It is the first tab in the navigator section on the left. It can be accessed by pressing cmd-1.

The debugger

This is now built into the main window and slides up below the editor. It can be shown/hidden using the middle of the three view buttons in the toolbar.

Build logs

These are in the last tab of the navigator, which can be accessed by pressing cmd-7.

Breakpoints

Breakpoints are now managed in the breakpoint navigator (the 6th tab). It can be accessed by pressing cmd-6.

The ability to add existing frameworks

Go to the build phases for your target and open the Link Binary With Libraries phase. Press the + button to show the chooser.

Command-Option-Double click support

It's gone. Use the quick help and then click for full documentation. Bug filed: (rdar://8689104)

Editor Splitviews

Switch to the assistant editor to access splitviews. Option clicking on a file (or a symbol in code) will open it in the secondary editor pane.

Such and such a keyboard shortcut

The keyboard shortcuts have (rather annoyingly) changed in Xcode 4. Colin Wheeler has updated his brilliant Xcode keyboard shortcut guide for Xcode 4.

3rd party editor support

It's gone. Unlikely to return.

Argument/environment settings

Edit the current scheme and choose the Arguments tab from the Run, Test or Profile sections to set up arguments/environment variables for those tasks.

The build config chooser

Choose the config to use for running, testing, profiling, analysing and archiving from the edit scheme sheet.

Target/project settings

Select the project in the project navigator.

Toolbar customisation

It's gone. Unlikely to return.

Perforce/CVS support

It's gone, along with mullets and mobile phones the size of bricks.

The Preprocess/show assembly options

It's gone. File a bug if you want it back.

IBPlugin support

Editing support is gone. Compiling support is available if you have Xcode 3 installed and plugins set up in IB 3. To edit use IB 3. May return eventually.

If you would like to send me a comment on this post, send an email to pilky@mcubedsw.com. Alternatively send a tweet to @pilky.

Xcode 4: the super mega awesome review tag:pilky.me,2011:view/15 2011-03-09T00:00:00Z 2011-03-09T00:00:00Z Martin Pilkington pilky@mcubedsw.com So it is finally here. Xcode 4 has been released into the world and we are now allowed to talk about it. As my review of Xcode 3.2 went down really well I thought I would have a go at reviewing Xcode 4 in depth. I'll also be publishing other posts over the next few days going into some of the bigger changes since 3.2 in more detail and hopefully helping you migrate. I've also put in radar numbers for all bugs and feature requests, so you can file duplicates or so any of the Xcode dev team reading this can find them. So without further ado, what is new in Xcode 4?

User Interface: it's all new and shiny!

The first thing that will hit you when you launch Xcode 4 is that the UI is quite a bit different. The toolbar is sparse, there's some iTunes-esque LCD display in the middle and everything (code editor, project navigation, inspectors, debugger etc) is contained in a single window. On the whole, the new UI is very clean, though it takes a while to get used to. Of course, while those coming from Visual Studio, Eclipse or even Xcode 3's all-in-one mode will feel right at home, those who preferred the condensed mode may find it hard to adjust.

Even though some may be uncomfortable, the decision to settle on an all-in-one UI does allow for many interesting and exciting improvements to other parts of the UI. I'll cover some of these now, but a few require much more detail and so are covered in later sections.


In Xcode 3 a project was the top level entity you dealt with. Your project contained your files, your targets, your executables and sometimes even references to other projects. Xcode 4 introduces a new top level entity: the workspace. Every window with an editor in Xcode 4 is a workspace. It can be as minimalistic as a text editor in a window or packed with a UI editor, find results panel, document info, media library and the debugger.

So why do we need a new top level object and what does it bring us? Well the answer is simple: multiple-project support. It is rare that you'll only ever have one project on its own with a large application. You may have plugins or frameworks that are in their own projects. Unfortunately you can't easily access the files of one project from another in Xcode 3, even if you have a cross project reference. Workspaces allow you to have several projects in one window. You can view the files of all projects at once (including for cross project references).

But it is even more than just seeing the files. Each workspace has its own unique build folder and its own index. This means that build folders won't conflict (and you never have to remember to trash the build folder again before you send a project folder to someone). It also means that you can perform actions such as find and replace across multiple projects at once, to easily see how a change in your framework will affect the app that uses it.

Workspaces also bring with them one of the new bits of "magic" that Xcode 4 does for you: Implicit Dependencies. Previously, if you wanted to include a 3rd party framework or library, you copied a build of it into your project folder. You then linked against that built version and possibly copied it to your bundle. This still works just as you would expect in Xcode 4. However, if you add the project for the library to your workspace, Xcode will detect this and when you next build, it won't use the built version from your project folder but will instead build the framework as a dependency. When you're done you can remove the project from the workspace and go back to using the pre-built version.

Another new feature we get from Workspaces is tabs (finally). But before you start jumping for joy, these aren't tabs like you'd expect to see in every other text editor and IDE on the planet. These tabs are more akin to a web browser's, where everything below the toolbar gets swapped out. Now tabs could be great, but at the moment they're kinda lacking (this is a theme you'll see a lot in this review). Being able to set up different UI setups within a workspace could be invaluable. You can have tabs for text editing, NIB editing, debugging, testing etc, which all show only the panels that are needed. The issue is you need to use the mouse to jump to a specific tab. It would be great if you were able to assign key combos to tabs so you can easily flick between them (rdar://8688957). Of course what would be better is if there were tabs for currently open files.

Lastly, workspaces aren't all kittens and fairydust. They have trouble remembering their state between launches. Most annoying is when you double click a file to open in a new window. If you close the main window before this second window, the next time you open the workspace it will be laid out like the second window. This means you need to set everything up again. (rdar://8993579)

The Navigators

Xcode 3 had one of the worst bits of UI of any Apple product. It was called the "Groups & Files" list. Now there were ways to improve it, but it was essentially one view that was trying to do too much. It was a file manager, target browser, find results list, SCM controller, symbols list, breakpoint manager and more. Thankfully, Xcode 4 does away with the Groups & Files clusterfuck and replaces it with something infinitely superior: navigators.

There are 7 navigators: Project, Symbol, Search, Issue, Debug, Breakpoint, Log. They're all fairly self explanatory. They let you navigate various aspects of your project in an elegant way. I'm going to focus on just the Project, Search and Debug navigators here.

The Project navigator gives you the bog standard view of your files and groups. It shows your classes, frameworks, resources, products etc. However, there are some nice improvements. At the bottom is a filter bar. This has 3 pre-set filters for recently changed files, unsaved files and files with SCM status, as well as a search field with which you can filter files across all projects in the workspace. One issue though is the inability to save filters or flatten results so they don't show groups. Such functionality would replace the smart folders from Xcode 3, which are sadly lacking in Xcode 4. I didn't use too many smart folders, but I had some I relied on, in particular one showing all the nibs in my project. (rdar://8993589)

Unfortunately, while it is improved over Xcode 3, file management still has a really fundamental flaw. You can map groups to folders on disk. If you create a new file in a group it is then added to that directory. However, if you drag a file into that group it isn't moved to the directory. It is something that has annoyed me ever since I figured out how to map groups to folders many versions of Xcode ago and is really a fundamental aspect of file management that is still lacking. (rdar://8869962)

The search navigator is the workspace wide Find & Replace. The navigator itself is just your standard find and replace panel and most of the same settings seem to be there, though the find options panel seems to have been replaced by a find scope sheet which is easier to understand. But if it is pretty much the same then why is it interesting? Well there is a cool new button for when you are doing mass replacements: the preview button. Clicking this button slides down a sheet that gives you a diff for every change and lets you select which changes to go ahead with.

The final navigator I want to highlight is the debug navigator. This is where your stack appears when you stop in the debugger. The stack information is much cleaner and easier to read. You can view multiple threads at once and have Xcode filter out threads that aren't relevant. On top of this though is a really great new feature (that also appears in Instruments) called Stack Compression. At the bottom of the debug navigator is a slider. As you slide from right to left, Xcode takes out stack frames that may not be relevant. It goes from a full stack trace, through just showing where your code calls into or is called by library code, all the way down to showing just the frame where the breakpoint was hit.


Inspectors coalesce the various info panels and inspectors of Xcode and Interface Builder. Many of them are context sensitive, so the IB inspectors only show when you have a NIB open and the Core Data inspector only shows when you have a data model open. They are very similar to how IB 3's inspectors work so there's nothing new to learn.

There are two ever present inspectors though. The first is the File inspector. This shows details like the location on disk, localisations, text settings, target membership etc. It is great to have a compact overview of a file, in particular the ability to quickly see and change the targets a file is in. The second is the Quick Help. Now Apple has been experimenting with adding context sensitive, compact help since 3.0. We've had the research assistant. We've had the pop up quick help. And now we also have the Quick Help panel.

Of course the issue with all of these is that they're completely useless most of the time. A lot of what is shown isn't relevant and what is shown isn't all that well laid out. It is usually just something to get to the full documentation faster. Now the Quick Help could be incredibly useful in one area: code completion. Visual Studio is excellent with this, displaying documentation for completion options, letting you decide the best to choose. If context sensitive help was added here (and it showed the full docs) then it would be great. (rdar://8993591)


Below the inspectors are 4 libraries: File Templates, Snippets, Objects and Media. Objects and Media are just the IB library equivalents. One thing missing though is the ability to view groups of items in the Object library. Groups are still there in the pop up menu at the top, but it was nice to view all items and see them still visually grouped. (rdar://8993592)

The File Templates library is quite interesting. As the name suggests it contains file templates, which you can drag to the project navigator to create a new file from the template. Now I say that it is interesting because it is duplicating functionality that already exists and doing so in an inferior way. The only way to create files from these templates is by dragging them with the mouse, and then you get no options other than naming them. You also have no sort of grouping beyond iOS vs Mac, granted there is a filter list.

It puzzles me why this feature exists, given that it is just as fast to go through the File > New panel, which you can use entirely from the keyboard and which offers extra options and a better layout. It also feels as though the Xcode team realised they needed libraries for IB but wanted to justify a library section being visible outside of NIB editing, and so added a file template section.

What does justify the library being around is the code snippets section. This allows you to create small code snippets that can then be suggested by the code completion system. I've used this to finally add a completion for a class extension. One small issue is that snippets (especially ones you've made) can sometimes get in the way of other items when you bring up code completion.
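
If you've not come across them, a class extension is just a couple of lines of boilerplate, which is exactly what snippets are good for. Something along these lines (the class name is just a placeholder of mine):

@interface MyClass ()
// Properties and methods declared here are private to the implementation file.
@end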

The media library also doesn't seem to fully realise its potential. When dragging an item from it to code, it only puts the file path in the editor. It would be nice if it added the appropriate code, such as [NSImage imageNamed:@"MyImage"] for images. (rdar://8993594)

Jump bars

Jump bars are a relatively new concept in Xcode. They've existed in other applications for a while and can be quite useful. Essentially they're a combination of a breadcrumb bar and a navigation menu. They let you navigate through your entire project. Their use for navigation between files is somewhat limited though. They are quite handy, but I suspect those who are used to using the Groups & Files list will use the Project navigator and those used to Open Quickly will continue to use that. However, where they excel is navigation inside of a file. The class overview popup from Xcode 3 is now part of the jump bar, and when you open a nib you can navigate quickly down through to the object you want. When you want to quickly jump around in a file or select something deep in a view hierarchy, or even just get an overview of a file, they are an ideal tool.

New key bindings

Apple has taken the opportunity to re-assign many key bindings in Xcode. Many are OK but some are downright annoying. Open Quickly is now shift-command-O, which, while it makes sense, means that all the debugger keys have had to be changed. Step Over/Into/Out have been assigned to F6, F7 and F8 respectively, in a nod to other IDEs, but ignoring the fact that these keys often do stuff already, usually system wide stuff.

On the plus side though, the general UI is much improved. You can see all key bindings at once or filter by Menu or Text bindings. By default the various groups are open, which while a relatively minor change is much appreciated. And last but by no means least, we finally have a search field to filter down the various key bindings to find the one you want. That said, there are options missing. There is currently no way to customise the key bindings for switching between the inspectors or libraries for example. (rdar://8897968)

Editing: Xcode gets brains

There have been many criticisms of Xcode, especially from those coming from other platforms. "The UI sucks", "the key bindings are different", "it doesn't give me a blow job upon a successful build". Most of them are silly complaints that boil down to either a) Xcode is different, or b) an inability to check if a preference exists. However, there are lots of valid criticisms about Xcode, especially with regards to code completion, refactoring and other similar features. All these ones boil down to a very simple fact: Xcode doesn't really know much about your code. Thankfully Xcode 4 is more than just a pretty face, it also has brains.

The reason for Xcode's lack of code knowledge is quite simple. Xcode (and before it, Project Builder) has had GCC as its main compiler. GCC was built for compiling code, so it's essentially a black box you pass some code and settings in and get a binary out of. It is also GPL licensed which means that it needs to be held at the end of a large pole to stop Xcode succumbing to it. This has meant that Apple has had to build their own parsers for indexing the code and providing code completion, syntax highlighting, refactoring and other similar features. As such there has been a divide between what the compiler parser (the ultimate truth) sees and what Xcode's parser sees. This has led to much hilarity down the years, including many a scene of developers shouting obscenities at inanimate pieces of plastic, metal and silicon.

Then along came LLVM and the Clang frontend. A quick bit of compiler design knowledge first. Compilers essentially have 2 main components: a front end which parses and validates source files (there is usually one per language or cluster of languages) and a back end which takes the output of the front end and converts it into machine code. LLVM is the back end and Clang is the front end for parsing the main C languages of C, C++ and Objective-C (hence the name C lang).

Like GCC, LLVM is also built for compiling code, but unlike GCC that isn't the sole purpose driving the design. It is also built to be a modular, easily extensible system upon which other tools can be built. Clang builds upon this and, on top of providing a compiler front end, also provides a public API that allows you to access the compiler's internal index of your code. As such you can build all sorts of tools upon it, all based upon the same view of the code that is used to build the final product (and is therefore the truth). As an added benefit it is released under a licence similar to the MIT/BSD licences, meaning it can be tightly integrated into Xcode without needing to open up the Xcode source. This extensibility is what gave us the Static Analyser and what now drives much of the new and improved features below.

Improved code completion

The code completion in Xcode has gone from "reasonable but lacking" in 3.2 to "competes with the best" in 4.0. The compiler knows what it should expect and so is able to offer you only the suggestions that should work. It actually seems intelligent in its suggestions as opposed to 3.2, which showed pretty much every symbol under the sun until you'd typed half of what you wanted. As previously mentioned, it doesn't show any form of documentation, which would help a huge amount, but hopefully we'll see that in a future release.

It is hard to adequately explain how much of an improvement it is in words. The best way is to show you an example. Take the header file on the right. It lists two properties and a method. We want to complete these in the implementation. Below are screenshots of what Xcode 3.2 and Xcode 4 show if you type a little bit and then hit escape to bring up the completion list:

[Screenshots: the completion list in Xcode 3 and in Xcode 4]

As you can see, Xcode 3 shows all sorts of nonsense completions for @synthesize, whereas Xcode 4 only shows the two properties. For the method, it takes into account the return type of the method and only shows the methods that return void, unlike Xcode 3 which shows any methods beginning with 'w'. On top of this, after you've synthesised a property or implemented a method, the auto complete will no longer offer it as a suggestion.
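
For anyone reading without the screenshots, a hypothetical header of the sort described above (the names are mine, not those from the original project) would look something like this; typing "@synthesize t" offers only the two properties, and typing "- (void)w" offers only the void-returning method:

@interface PilkyExample : NSObject

@property (copy) NSString *title;
@property (assign) NSInteger count;

- (void)writeToDisk;

@end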

The code completion is based upon Clang 2.0, which as well as C and Objective-C, now has full support for C++. I'm not a big C++ user, but from the demos I've seen the code completion for C++ is just as good as for plain C and Objective-C.

Fix It

Clang does a much better job than GCC at dealing with errors while parsing. When it encounters an error, say a missing semicolon, it continues parsing as though the semicolon were there. This isn't perfect but it does help reduce nonsense errors. Of course, because Clang has some idea of what would fix an error or a warning, it is able to provide that to tools that use it as well. As such, Fix It is born.

Fix It is very simple. It parses your code as you type, highlighting warnings and errors and in some cases provides suggestions for how to, well… fix it. The example on the right is one I commonly hit, caused by the ambiguity of assignment in an if statement. While the code is fine, it ideally should be enclosed in another set of brackets so as to differentiate it from where you meant to check equality. As such Xcode offers you two options to correct it.
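
For those without the screenshot, a quick sketch of the kind of code that triggers it (the variable names are my own, not from the post):

NSDictionary *info = [NSDictionary dictionaryWithObject:@"Pilky" forKey:@"name"];
NSString *name = nil;

if (name = [info objectForKey:@"name"]) { // Clang flags the ambiguous assignment here
    NSLog(@"Found %@", name);
}

// One of the offered fixes: extra parentheses to say "yes, I really meant to assign".
if ((name = [info objectForKey:@"name"])) {
    NSLog(@"Found %@", name);
}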

There are plenty of other fixits including for misspelled symbol names, missing semicolons and many more. Sadly, one Fix It that isn't there but I'd like to see is for importing a header. (rdar://8813882) The compiler knows when a symbol is missing and the full project index should be able to give a list of files where the symbol is potentially defined, so there should be enough core components in order to add this functionality. Hopefully we'll see this and other Fix Its in the future as Apple improves the feature.

The compile as you type also ends up feeling like a game at times. It's a race between you and Xcode as to who can detect a problem first. I've actually found I type and find errors faster because of it, even in other apps that don't have it. Sadly though, it does sometimes fall out of sync with the compiler, despite using the same parser. I suspect this is mainly due to refresh intervals and switching between files can often help clear it up.


One of the weaker aspects of Xcode is refactoring. Prior to Xcode 3 there was no refactoring at all. I was hoping that with the integration of Clang, Xcode 4 would offer improved refactoring, but it seems this area is relatively unchanged. The UI is a little different, but you get the same few options: rename, extract, move up, move down, create superclass, encapsulate (modernise loop has been removed). There are still the same problems where Xcode is very temperamental about allowing you to even attempt a refactor. Hopefully in future versions we'll see some big improvements.

One area of refactoring that has improved a lot is the Edit All In Scope feature. It finally properly respects scope, unlike in Xcode 3 where "scope" meant "method". Unlike with Xcode 3 I've yet to swear at the feature and wish painful deaths upon its implementors, which is a good litmus test for whether something is well implemented or not.

Interface Builder Built-In: Xcode and IB, sitting in a tree

Ever since the dawn of time, there has been one app for editing code, compiling, debugging, project management etc and then Interface Builder for editing user interfaces. Back in the IB 2.x days you had to copy the headers over to IB at regular intervals in order to expose outlets and actions to connect to. Then with IB 3 there was some degree of integration, with all the header copying being handled automagically. Sadly parts of this integration broke with successive releases (such as Xcode saving nibs before compiling) and it was still really a hack. Apple has decided that enough is enough and has built Interface Builder into Xcode with version 4, making it a truly integrated development environment like many other IDEs.

Improved UI

So the first thing you notice is the improvement to the UI. Rather than each window, view and menu floating around on your screen, they are all on a canvas with a nice grid paper background. One of the best results of this (besides no longer having to search for that tiny view amongst a sea of windows) is that you can resize windows and root views from any side, making it easier to resize just vertically or just horizontally.

On the left you have the document well, which contains the objects in your NIB. Much like the dock, there is a divider between types of objects. At the top you have various proxy objects such as files owner, first responder and the app and below you have the actual objects in the NIB. A little circle next to objects shows that they're currently visible on the canvas. If you're wanting more than just icons though you can expand into list view (column view has been removed, though the jump bar provides an adequate replacement). In list view you gain the ability to drill down through the hierarchy, rename objects and filter them.

The inspectors are pretty much the same as they were before, though they have been re-arranged somewhat. Also, if you're used to switching between them with hot keys, you'll need to throw out that muscle memory as the IB inspector hot keys are now taken by the navigators. I don't find the inspectors as visually appealing as the IB 3 inspectors. It feels like the controls have been squashed too closely together, though this is a minor complaint.

One small loss though is that of the font menu. I used to like being able to take a label and hit command-B to make it bold or use command-+/- to increase or decrease the size. Those commands aren't available any more so you have to resort to the inspector.


As you would expect with building IB into Xcode, the integration between your NIBs and your code is even greater than ever. IB has up to date knowledge of your code, though there are some cases where a save is still required before it picks up on changes. It also addresses the longstanding bug with Xcode 3 where it wouldn't warn you of unsaved NIBs when you wanted to compile.

One brilliant example of where integration has been improved is with bindings. In IB 3, you could set the class that a controller represented, but that would do little to aid you when filling out the key paths. You only got completion suggestions for key paths you had already entered. In Xcode 4, you will get suggestions for the properties on the class. Better still, it isn't limited to just properties on the current class, but extends to properties on the objects those properties return. One limit though is that it only works with declared properties and not standard accessors. Xcode 4 will put up a small warning (though not a compiler warning) if it can't validate the keypath, so for example using myString.length as the keypath, while valid, will show the warning as length is not a property.
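
To illustrate the distinction, given a model along these lines (the class and names are hypothetical), IB will suggest and validate fullName but show its small warning for displayName:

@interface PilkyPerson : NSObject

@property (copy) NSString *fullName; // declared property: suggested and validated

- (NSString *)displayName;           // plain accessor: still valid for bindings, but not verifiable

@end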

The most demo-worthy improvement though comes from the ability to have code and NIB side by side in a split view. Previously if you wanted to define a new action or outlet, you had to go to the header and type in the definition, then go back to IB and hook it up and then go back to your code again to synthesize the property or write the action. All that work has been reduced. With the code and NIB in different panes of a split view you can drag from your NIB to your code to hook up or even create a new outlet or action.

Now many people have seen that, but what I doubt they have seen is that this also works for bindings. I discovered this entirely by accident while playing around one day.

Loss of IBPlugins

For all the cool new hotness in Xcode 4, there is one big gaping hole in its functionality. IBPlugins cannot be loaded and as such any NIBs that rely on components from them cannot be edited. They can be compiled, but only if you have Xcode 3 and IB 3 still installed. This is a major missing component, though it isn't unexpected. A line has to be drawn at some point in order to say "this is the first version we'll ship" and IBPlugins are likely one of the things that didn't make the cut for 4.0. Of course the big issue with IBPlugins is that they don't work with iOS. My hope is that they're missing because Apple is working on an improved system and would rather leave it out of 4.0 than ship a half implemented solution. That said, I hope they don't completely replace the IBPlugin system as it is incredibly powerful and one of the nicest plugin systems I've coded for.

Project Editor: I want to marry whoever worked on this

You may not have gathered this from the header, but I think the project editor is one of the best new things about Xcode 4. It is the best example of how the all in one layout allows for vastly improved UIs for various components. So what is it? Well it is an editor for project and target settings. You can set up build rules, build settings, build phases, the Info.plist etc. It has had a huge amount of thought put into its design. I'm going to go through a few parts of it to highlight some of the best improvements.

Target Info

In Xcode 3, there wasn't much in the way of dedicated UI for updating the Info.plist. There were a few fields and a table for document types, but at times it was easier to just edit the plist manually. In Xcode 4 this has changed. The dedicated fields for the UI have been slimmed down to just the essentials, with almost everything else being put in a plist editor. Below this editor though are several sections dedicated to certain sets of properties: document types, exported/imported UTIs, URL types and services. These were some of the more esoteric things to manage in your Info.plist and having a dedicated UI makes them easier to create and manage.

Build Phases

Build phases in Xcode 3 were basically just groups you threw some items into and maybe set a few properties on. In Xcode 4 each type of build phase has a dedicated UI, which is afforded them by the massive increase in screen space available. For example, the Copy Headers phase in Xcode 3 was just a list of headers, with no indication of what visibility those headers had. Instead you had to go to the target details view and modify them there. The increased flexibility allows for a vastly improved UI that groups headers by their visibility and lets you drag headers between groups to change it. The compile sources phase again used to be just a list of files. In Xcode 4 it uses the additional space to show you where on disk the file being compiled is, and also lets you view and edit compiler flags for specific files.

The biggest improvement by far though is to the run script phase. In Xcode 3 you had to get info on the phase in order to edit the script, and then you only got a rather small plain text area in which to edit it, offering none of the benefits of the editor you could use for your code. In Xcode 4 each run script phase has a much more fully featured code editor with syntax highlighting, line numbering and auto indenting (this editor is available anywhere that you can write a script in Xcode).

Build Settings

Build settings in Xcode 3 were a mess. You had project level settings, various target level settings, each needing their own window to view. On top of that you had to switch between configurations using a drop down in order to see differences. Well Xcode 4 has a nice big window in which to put this information. Wouldn't it be awesome if it was put in there? And what if you could see project level and target level settings side by side (and have highlighted where the resolved setting is set)? What about viewing all your build configurations at once? And maybe the ability to select multiple targets and compare their build settings? And would you like a pony with that too? Well thankfully Apple has solved all of those with the build settings window (well, except for the pony, but you can't have everything).

The new build settings panel is the posterboy for the possibilities opened up design-wise by the new UI. It takes a horribly complicated system and, while not changing anything about how it works, turns it into an understandable, elegant and, dare I say, beautiful piece of UI design.

The Assistant: "It looks like you're writing some code…"

In software, there is a large spectrum between applications that are "dumb" and only do stuff when you give them explicit input, and apps that are "smart" and can preempt you. Now few apps are completely dumb; most offer some degree of smarts to try and preempt you. The issue is, the more smarts you add to your app, the more you risk falling into the uncanny valley of application intelligence. The software is smart enough to do something for you, but not smart enough to do it well enough of the time, leading to frustration. It is in this uncanny valley where such much loved features as the Office Assistant (aka Clippy) reside. Add to this list the Xcode assistant.

The Assistant is essentially a split view, letting you see multiple files at once. You can click a file to open in the left view and option click to open in the right view. However, it also can work out files that might be relevant based upon the file open in the left pane, such as subclasses, counterparts, protocols, files that include this file, etc. You can access these manually using the button to the left of the back/forward buttons in the jump bar. However, you can also set the assistant up to automatically update other panels based upon the file open in the main panel. I just question how useful this automatic setup is in real development.

There are some annoying issues with the assistant. For example, counterpart and automatic are different modes, despite being effectively equivalent: they show the most appropriate source file for the current selection. But counterpart is only available if you have a source file on the left, and automatic is only available if you have a NIB or Core Data model. And switching between source and other files requires you to set the mode of the assistant again. It also seems to hog the option key, meaning that if you want to use option in a key binding, there's a chance it may also cause the assistant to rear its head.

I also wonder how useful the other modes are for being shown automatically. There is generally more than one file in those groups, so you rarely get the correct one. And the one mode that would be truly useful, showing the header for the current selection, doesn't seem to exist. If I select an object that is an NSArray, showing the NSArray header (or even better, the full NSArray docs) would be more beneficial.

Ultimately, the assistant isn't bad, it's just not all that useful. It serves some needs, but these are relatively rare, and the needs it does serve are better handled by other means. For example, the counterparts mode is, in my opinion, better handled by the Jump to Counterpart command. Down the line the assistant may become much more powerful, but at the moment it is too weak to be of use beyond a regular split view.

Schemes: not the evil variety

The two main new concepts in Xcode 4 are Workspaces, which I've already covered, and Schemes. Now Workspaces are fairly easy to understand. Unfortunately schemes are probably the most confusing new feature for existing Xcode users. It is well worth working through that confusion though as they are quite powerful and at their heart are actually quite simple. I'm not going to go into too much detail about schemes now as I'll be posting a dedicated blog post on them in a few days.

They primarily exist to replace the previous collection of settings for the launch location, the active target, the active build configuration etc. While there are many combinations of settings available, there will only be a few that you will ever use. A scheme essentially encapsulates that, allowing you to configure those few combinations and switch between them. But they're much more powerful than that as well. They let you set up an environment in which the actions of building, testing, running, archiving, analysing and profiling are defined. You can configure what environment variables to set when you run, what tests to perform when you test, what templates to use when you profile etc. You can also define actions to run before or after a stage, though these are currently limited to running scripts and/or sending emails.

While confusing, they are potentially one of the most powerful changes in Xcode 4 and can massively simplify the process of actually building and acting upon your code.

Debugging: better than just swatting at flies

Debugging is an important part of your development workflow. You can potentially spend half of your time, if not more, debugging. Xcode hasn't been too bad for debugging, but it hasn't been too great either. Much like how many features have been hindered by a monolithic, old, GPL-based compiler, debugging is somewhat limited by a monolithic, old, GPL-based debugger: GDB. Of course not all of the blame lies with GDB; there are parts of the debugging UI that could have been improved. Apple has addressed both of these to varying degrees in Xcode 4, though they've had more success with some parts than others.

Improved UI

The UI for the debugger is much more elegant in Xcode 4, while also being a lot more powerful. As previously mentioned, the stack trace has been moved to a navigator. Another navigator available is the Breakpoint navigator. This is the one stop shop for managing breakpoints. You still set a basic breakpoint by clicking on the gutter in an editor, but you can create symbolic and exception breakpoints from the navigator.

To edit a breakpoint you simply right click and choose "Edit". This brings up a popup window with various options, depending on the type of breakpoint. For plain file-based breakpoints you can set your usual condition, ignore count and actions. For symbolic breakpoints you can set the symbol and library. Exception breakpoints are the nicest though, giving you simple pop up lists to choose between stopping on all exceptions or just C++ or Obj-C exceptions, and to choose whether to break when the exception is thrown or caught.
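As a quick, hypothetical illustration of where the condition and ignore count options earn their keep (the ItemProcessor class below is invented purely for the example, not taken from any real project):

    #import <Foundation/Foundation.h>

    @interface ItemProcessor : NSObject
    - (void)processItems:(NSArray *)items;
    @end

    @implementation ItemProcessor

    - (void)processItems:(NSArray *)items
    {
        for (NSUInteger i = 0; i < [items count]; i++) {
            // Set a breakpoint on the line below, right click it and choose "Edit",
            // then give it the condition "i == 42" (or an ignore count of 42) to stop
            // only on the iteration you actually care about, rather than on every pass.
            NSLog(@"Processing %@", [items objectAtIndex:i]);
        }
    }

    @end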

The main debugger UI appears at the bottom of the screen. It has 2 columns: variables and console. The console view is the same standard console as Xcode 3. However, you can now choose to filter the output to show only debugger output or only output from the running target. The variable view is also much improved. You can choose to show all variables, just local variables or have Xcode automatically decide which variables to show. This last option is probably the most useful as it shows the variables relevant to the current context. It will show the local variables, but also other variables accessed within the current scope, such as instance variables. You can also perform a textual filter of variables, both by name and by value.

LLDB

In order to "fix" GDB, Apple has opted to replace it. The replacement takes the form of LLDB, or the Low Level Debugger. As you can probably guess, LLDB is to LLVM what GDB is to GCC. But the relationship is actually much closer than that: LLDB is built on top of LLVM for things such as expression parsing and fundamental data structures. As such, LLDB should (theoretically at least) never lag behind the compiler in supporting new language features.

The main design focus of LLDB is on speed and efficiency. It aims to be much faster than GDB, take up less memory while running and be much more intelligent about what it needs to parse. It also focuses on ease of use: the command structure is designed around consistency and a basic noun-verb format. And on top of this, much like its sibling compiler, it aims to be highly extensible. LLDB exists as a framework, LLDB.framework. Both the command line and the Xcode interfaces are built upon this framework. There is also a Python scripting interface that has full access to the LLDB API. It is as much a tool for building debuggers as it is a debugger itself.

The use of LLVM for expression parsing means that LLDB should be much better than GDB at supporting all the language features the compiler does. It performs JIT compilation of the expressions you enter. The key word, though, is "should". From my experience it is somewhat lacking in its expression parsing. It's no worse than GDB, but it falls short of the promise of high-quality expression parsing. One example from my testing is that it, like GDB (and myself), is not a great fan of the Obj-C property dot syntax.
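To make the dot syntax point concrete, here's a minimal sketch (the Person class is invented purely for illustration). In compiled code the two forms are equivalent; it's only when you type the equivalent expressions into the debugger console that the difference shows up.

    #import <Foundation/Foundation.h>

    @interface Person : NSObject
    {
        NSString *_name;
    }
    @property (nonatomic, copy) NSString *name;
    @end

    @implementation Person
    @synthesize name = _name;
    - (void)dealloc { [_name release]; [super dealloc]; }
    @end

    int main(int argc, const char *argv[])
    {
        NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
        Person *person = [[Person alloc] init];
        person.name = @"Martin";

        NSString *viaDotSyntax = person.name;      // property dot syntax
        NSString *viaMessageSend = [person name];  // explicit message send
        NSLog(@"%@ / %@", viaDotSyntax, viaMessageSend);

        // Both lines above compile to the same thing. In the debugger console,
        // however, an expression like "po person.name" is the kind of thing that
        // tends to upset both GDB and (for now) LLDB, whereas "po [person name]"
        // is usually evaluated without complaint.

        [person release];
        [pool drain];
        return 0;
    }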

And this really is pervasive throughout LLDB. It seems half-baked at the moment. It is a coin toss as to whether it will actually load your application when you launch, and I've had occasions where it has caused Xcode to hang, requiring a force quit and relaunch. This isn't to say that there is something fundamentally wrong; it is simply the first release. Much like when LLVM was first added to Xcode, where it was nice to experiment with but wasn't really ready to use as a main compiler. I highly recommend trying it out, it may work better for you than it has for me. Hopefully, over the next several releases of Xcode 4 we'll see it mature to the point where it is stable enough to drop GDB and use it full time.

No "Preprocess" or "Show Assembly Code" commands

There has been some controversy over the loss of two commands in Xcode 4 that aided debugging. The Preprocess command ran your source file through the C preprocessor and displayed the output, which was incredibly useful for debugging preprocessor macros. The Show Assembly Code command showed the assembly that was generated for the class, so you could debug issues or look for performance improvements. Neither of these features is available in Xcode 4, and there is no obvious way to replace the functionality. I haven't been affected too much as I rarely need them, but for those who do it is potentially a reason to hold off ditching Xcode 3 completely.

Version Editor: a TARDIS in your IDE!

The common theme with Xcode 4 is the folks at Apple asking "what was a valid criticism of our dev tools?" and looking to fix it. One of those valid criticisms is that the SCM support was pants. It was hard to set up, difficult to keep working and generally it was easier to use another tool. The Version editor serves as the replacement for what we previously had, and it is a pretty reasonable effort.

The key thing about it is that it works, and it works quite well. It supports Subversion and Git (and drops support for CVS and Perforce). It doesn't expose every last feature, only the essential operations such as branching, committing, logs, annotate/blame, merge and push/pull, but you don't need much more than that for day to day work. Unfortunately there is no support for other DVCSs such as Mercurial and Bazaar, and no sort of plugin system to allow 3rd parties to support them. Hopefully they'll make it in, but it's more of a nicety than a necessity. (rdar://8583054)

And that's really the Version editor in a nutshell. It is the 3rd main editor view after the Standard editor and the Assistant editor. Rather than something you rely on in order to get anything done, it is something that is nice to have if you use Git or Subversion, but that you don't necessarily miss when you don't. As a Bazaar user I haven't had much use for it, and not using it hasn't affected my use of Xcode 4 at all.

Comparison, Blame & Log Views

But you don't want to hear about not using it; what is it actually like to use? Well, as I've said, it only supports the essentials at the moment. There are 3 main views available to you: Comparison, Blame and Log. Comparison view is your basic diff between two versions. You have one version of a file on the left, another on the right, and you see the difference between them. Jump bars at the bottom let you switch between revisions, but there is also a timeline that can be slid up to give you a more temporal view of your commits, showing commits grouped by day and their commit messages as you hover over them. It's a nice way to quickly go back through your history.

The Blame view is your basic who-did-what view. Each line has the commit's user, message and date. Each section has an arrow that, when clicked, takes you to a diff of the commit in the Comparison view. Finally, the Log view is about the most basic log view you'll find: message, date, committer's name and revision number/hash. You can't even click on a commit to see the selected file as it was at that revision; you need to click an arrow to be taken to the Comparison view.

The one thing that is apparent though is that not too much time was spent on the UI aesthetically. It is perfectly functional and very easy to use, but the Blame and Log sections are rather… meh, visually. The Comparison view would be the same but the timeline is a nice visual touch. It seems like a bit of a nitpick, but given how much of the UI is nice to look at in Xcode 4, it's a shame these are so bland.

Committing

I'd like to highlight committing, as it is the one area where Xcode 4 does something that is (as far as I'm aware) unique. The commit sheet itself is your basic commit UI. It shows the changes being committed (either as a flat list, their in-app representation or their on-disk representation) and shows you a diff of the changes. You can also continue to edit in this sheet if you wish. However, the real magic comes if you've got multiple projects in a workspace, one in Subversion and one in Git. If you make changes to both and then commit, you'd expect to have to commit the Subversion changes and then the Git changes. But what you can do is commit to both, with the same message, in just one action. Xcode handles everything for you, meaning that Subversion and Git are not just equal; you don't really have to care too much which is used under the hood.

Documentation: RTFM!

Apple has tried repeatedly to get documentation right. They have some of the highest quality documentation around, but accessing it in Xcode was always a pain. Many people used it only if needed or switched to other apps to handle it. In Xcode 3.2 they finally nailed it. It was quick and easy to use from the keyboard, it was incredibly functional, yet was very simple. You could get in and out with minimal fuss. It was a joy to use.

Unfortunately they decided that, as they're redesigning everything else, why not also redesign the Documentation viewer. What could possibly go wrong? Well, unfortunately, a lot. Firstly, it doesn't search as you type any more. I've used search-as-you-type many times when I don't quite know the name of something, but sadly this is missing. (rdar://8691689) Next, keyboard navigation is messed up. The search options, rather than the search results, are now directly after the search field when you tab. Of course you could just hide the options, but then you can't see which options are selected, and if, like me, you're often switching between iOS and Mac development, you want to be able to change the scope often.

The search results are also much less useful at a glance. In Xcode 3.2, the top section of results was for API results that matched. It listed functions, classes, methods, constants etc as a flat list. In Xcode 4 they're now in an outline view grouped by the file they're in. This means that to actually see the search results you have to go through and open every bloody arrow. (rdar://8993605)

And then the actual documentation view itself is vastly inferior to what was in Xcode 3.2. Gone is the table of contents sidebar that was easy to glance at and scroll through and gave a great overview of the class. Sadly, jump bars have replaced it. (rdar://8689086) While jump bars are great in most of Xcode, this is one of the places where they are crap. They replace a list with a large target area for scrolling with a series of nested menus that are a pain in the rear end to navigate. And to top it all off, those occasions where you would like to view the PDF version? Well tough cookie, there's no way to view it beyond opening it in the browser first. (rdar://8993604)

Of course, screwing up the documentation view isn't the only thing that has happened. You can no longer command-option-click on a symbol in the editor to use it as a search term. This was one of the most frequent ways of searching for documentation, but now the only way to get to useful documentation on a symbol is to get Quick Help and then go to the full documentation. (rdar://8689104)

Of everything in Xcode 4, this is by far the worst change. They've taken arguably the one thing that Xcode 3.2 got very right and turned it into, what can most politely be described as, a steaming pile of donkey shit. Sure you can use it to find documentation, but you can use TextEdit to write code, it doesn't mean it's particularly good at it.

Testing: 1,2,3

The best thing to say about automated tests in Xcode 3.2 was that they were possible. That's actually probably a little unfair. There were templates for unit tests, there was support in the build UI. It wasn't the dark ages, maybe just the 1700s. The big issue was that testing felt like an afterthought and was tied to the build process, with the job of actually testing thrown onto an external script.

In Xcode 4 testing has improved in almost every way. Almost every project template now has a checkbox to add unit tests to the project, meaning you no longer have to perform all the usual setup. In fact most of that setup is now redundant and simply there for backwards compatibility with Xcode 3.2. For example, there is still a build phase to run a script to run the tests, though this isn't actually used.

To run tests, you no longer select the test bundle target and build it. Instead you use one of the new menu commands: Test or Test Without Building. The former performs a build and then tests; the latter, well, you can guess. So how does Xcode 4 know what tests to run? You choose them in the test section of the current scheme. Here you get a list of all the test bundles, test suites and individual tests, and can check or uncheck those you want to run. This means you can set up a development scheme with a few tests and have a release scheme that runs your whole test suite. This is a massive jump over what was in previous versions of Xcode.

There was also another major complaint about testing in Xcode: the inability to debug tests. The only way to figure out what was going on with tests was to throw in some NSLogs and then search through the build log to find the result. Xcode 4 now runs your tests in the debugger, meaning it will hit breakpoints in both the tests and the tested code. I know of a few people who may well be crying tears of joy at this.
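As a small sketch of what that enables (EntryList and its methods are hypothetical, stand-ins for whatever class you're testing), this is the kind of SenTestingKit test the new templates set you up for; in Xcode 4 you can set a breakpoint inside it, or inside the code it calls, and actually step through when you run Test:

    #import <SenTestingKit/SenTestingKit.h>
    #import "EntryList.h"

    @interface EntryListTests : SenTestCase
    @end

    @implementation EntryListTests

    - (void)testAddingAnEntryIncreasesTheCount
    {
        EntryList *list = [[[EntryList alloc] init] autorelease];

        // A breakpoint on the next line is now hit when running the Test command,
        // letting you step into -addEntry: rather than peppering it with NSLogs
        // and digging through the build log.
        [list addEntry:@"First entry"];

        STAssertEquals([list count], (NSUInteger)1, @"Adding an entry should give a count of 1");
    }

    @end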

For all the wonderful improvements to testing there is one major downside. The UI for testing is still in the build results, which is rather cramped. It would be nice to have a dedicated UI for testing, maybe a navigator, that let you view test successes and failures and maybe view past runs. (rdar://8993596) Besides this though, testing has made a massive leap forward in Xcode. If Xcode 3.2 was the 1700s, we're now approaching the late 20th century in Xcode 4.

Behaviours: be on your best!

Behaviours at first seem like a minor addition to Xcode. They let you make things happen when a certain event occurs, such as a build succeeding or a test failing. You can have sounds play, bezels show, a navigator or tab switch, a script run etc. These are all quite useful abilities. For example you can have the debugger show when you start running and hide when you've finished a run. Or you could have Xcode read out the results of a long build once it is finished.

There is nothing too exciting about them in their current state, they're just a nice but minor feature. So why dedicate an entire section to them? Well it is their potential that is so exciting. Imagine if you could have a behaviour for when a certain type of file is selected. (rdar://8689025) When a NIB is selected you could switch to the assistant editor to let you hook up connections. When a source file is opened you may not need the inspectors and library so you could have them hide. What about behaviours for when you add or remove a file from the project? There are lots of possibilities available for both events that cause behaviours and behaviours to perform. Hopefully we'll see lots of improvement to this feature in the future.

One flaw though is that behaviours are global. It would be nice to have either behaviour sets or even project-specific behaviour overrides. (rdar://8935330) One example is that I have the debugger set to hide when I quit the running application. That is great for regular apps, where any console output I want to see I can see before I quit. For command line apps though, I want the debugger to stay up after I quit. At the moment I need to change my prefs depending on which project I have open. The ability to override these in the few command line apps I have would remove this hassle.

Conclusion: yay or nay?

Xcode 4 is an interesting contraption. It has 4.0 as its version number, yet it is almost a 1.0. Xcode 1 to 3.2 were almost transitional, helping the migration from Project Builder to what we have now. In a sense Xcode 4 shouldn't be judged on what it is, but what it shows it will be. The one thought that keeps popping into my head while using it is that there is a lot of cool new stuff, but it is lacking. The foundations are pretty much all there to build an Xcode that can compete with the likes of Visual Studio and Eclipse on all fronts. They just need fleshing out more. There are very few areas where Xcode 4 is worse than previous versions. The majority of those areas are where the features simply aren't there, but where they may re-appear in future versions. In every other area it offers major leaps forward in usability, performance and enjoyment.

I've always considered Xcode to be the nicest IDE to use in terms of user interface, though it wasn't exactly pretty. It's just the competition was so ugly and cluttered. I've often likened saying Xcode 3 was the prettiest IDE to saying that it was the nicest smelling dog crap. Xcode 4 however, genuinely is pretty. I love looking at it and admiring the amount of work that has gone into it. Apple has had some of the prettiest developer tools in terms of Interface Builder, Instruments, Quartz Composer etc for quite a while, but Xcode was dated. It now feels like the sort of user interface you'd expect from Apple.

Some people, though, claim that it is the iTunes-ification of Xcode. That it is a step backwards. That there is too much garnish and not enough meat. It's a valid complaint, as Xcode 4 does lack some of the more advanced features of Xcode 3. But it is wrong to say that iTunes is the inspiration. iTunes isn't particularly great; it is old, clunky and somewhat ugly. It is more akin to Xcode 3. Xcode 4 is much closer to iLife or iWork. It is modern, it puts a large amount of focus on the UI and it tries to use great engineering to reduce the work you need to do to create great results.

So should you switch to Xcode 4? Yes, absolutely. But I'd keep Xcode 3 on your system for a while longer. The lack of IBPlugin support makes that a requirement for most Mac development. But you shouldn't give up on Xcode 4 completely just for a few missing features. The many areas with major improvements more than make up for the few areas that are lacking. After using Xcode 4 for several months, I can't really move back to 3.2. It just isn't the same.

Every developer should heap as much praise on the dev tools team as they can, as they have done a fantastic job. It's hard to take a large, mature and very familiar application and take a step back and ask "how could we do this better?" Sure it isn't to everyone's liking, but nothing is. But the beautiful thing about Xcode 4 is that it is still a lump of clay that can be sculpted. There is still a lot that can be done to improve it. And if you want to see improvements, just remember to file bugs.

If you would like to send me a comment on this post, send an email to pilky@mcubedsw.com. Alternatively send a tweet to @pilky.

Core Developer Principles tag:pilky.me,2011:view/14 2011-02-21T00:00:00Z 2011-02-21T00:00:00Z Martin Pilkington pilky@mcubedsw.com This is a blog post I've had on my "to write" list for a while. At the end of December Buzz Andersen posted a link on twitter that outlined why he doesn't feel comfortable with Pair Programming or the Agile ideal. A conversation followed between several developers, including myself, which was nicely archived by Manton Reece using his Tweet Library app. I recommend reading through the archive to get an idea of what was said.

Now a few weeks later when Manton posted the archive a thought occurred to me. There is a lot of stuff out there which developers need to look at. Lots of concepts, lots of patterns, lots of tools. With this massive jumble of stuff it can be quite easy to lose sight of what might be the core principles.

There are very few absolutes in programming, very few things that, if you don't do them, will eventually cause things to blow up. These are the core principles of development. There are lots of things out there that claim to be core principles, but you can develop good software without them, and they are usually just one way of applying the core principles. The problem is that it is hard to figure out what these core principles are. I don't know what they all are; I doubt anyone does. But you can tell a core principle when you see it. I'm going to go through two examples of things that some consider core and look at what about them is actually core.

Test Driven Development

Test Driven Development, or put another way: test first, code later. You write your software by writing a test for a method or function or component and then writing code to make it pass. There are a lot of people out there who swear by TDD, and some of the more extreme TDD advocates go as far as to say that TDD is the way everyone should write code. Now it is true that TDD has advantages, but it also has disadvantages. There are also some things that depend on your view of the world.

As one example, TDD solidifies the interface for a method/function very early on. Some may consider this good: it helps lock things down, it makes you think things through more in advance. However, some (including myself) consider this early solidification annoying. It just doesn't feel right; we prefer things to stay fluid in the early stages and so we want to be able to tinker. As such, when we decide to make a change, we don't want to also have to update all our tests to cope with that.

People like me tend to do the opposite of TDD: code first, test later. Whereas a TDD user would write their test, run the test, write the code and re-run the test, I follow the somewhat simpler tactic of write the code, run the code, write the test when the code works. Of course, even I find TDD can be very helpful in some circumstances, such as writing a framework or library from scratch.

The basic point is, there are different views of testing here. The first is that tests are a development aid: they help you build software. The other view is that they are simply a testing aid: they help stop you breaking stuff that already works. Neither view is wrong. There has been plenty of good software shipped without TDD and plenty of good software shipped with it. You can get away with using either view. There are even more views as well. Sometimes I don't write a test at all, I just test manually. Automated tests are better, but sometimes I simply don't have the time, or maybe the need (especially if it's a test/toy project).

So we have all these concepts, plus many others. They all seem different, and people have long and heated discussions about them and which is better. But there is one underlying theme in all of them and it is a core dev principle: you should be testing your code. Be it writing tests first, writing tests later or just manually running the code, you should be testing. How you approach testing is entirely up to you, but the benefit of any one form of testing over another is minuscule compared to the benefit of any testing over not testing at all. If you just write code and ship it, there are almost guaranteed to be bugs. You cannot survive long without any testing.

Pair Programming

Pair Programming is where you have two people looking at the same code at the same time. They don't necessarily have to be in the same room, or even on the same continent, just as long as they're both looking at the same code. The idea is that the whole is greater than the sum of its parts, i.e. two programmers working together on the same code produce better results than two programmers working separately on different code.

A lot of people swear by Pair Programming and do it everywhere. And pretty much every developer has pair programmed at some point in their life. But like TDD, it isn't necessary for writing great software. You don't need two pairs of eyes on the same code all the time. Some people prefer it, but quite a few don't. They feel uncomfortable with Pair Programming, they don't feel as free or as creative. Ultimately it leads to them not being as productive as they would be if they coded alone.

In the archive I linked to above there is a tweet of mine where I say that "I've found the only useful pairing sessions start with the words 'I cannot figure this out'". For everyone, Pair Programming has its benefits, but they aren't there all the time. Due to this it cannot be a core dev principle. Think of all the great code that has been written by a single person. It's not going to completely fall apart in the near future. But there obviously is something behind Pair Programming that is a core principle, as everyone finds it useful at some point.

That core principle is code review. Now this doesn't have to be where a whole team gets together. You don't even need another person. It can just be yourself, reviewing code you wrote last week, last month or last year. If you can get someone else to help then great; a fresh pair of eyes can find problems you've missed. It also doesn't need to be comprehensive or on a fixed schedule. You don't need to review the entire codebase before each release.

Now you might say "well I can ship without reviewing code". To a degree, sure. But for how long? If you never read over code you've already written, how long before something falls apart? Of course, I would also say that it is almost impossible to ship without reviewing code. You're constantly reviewing the code you write. Reading it back is a natural part of writing code.

Don't miss the forest for the trees

When someone says "if you aren't doing X then you're a bad programmer" then listen to them, but don't believe them. Question what X is. Can you write good software without doing it? If yes then you aren't a bad programmer, just a different type. If the answer is no, as with testing or code review, then you have found a core principle and yes, if you aren't doing them you're a bad programmer.

There are many core principles out there, but I wouldn't be naive enough to think I can list them all. It can also be hard, when there are loud voices speaking against you, not to believe you're doing something wrong. There are also lots of things that may seem like core principles because everyone agrees you should do them, but they can be generalised. Take for example commenting your code and using logical names for variables and methods/functions. They all fall under the umbrella of "write readable code".

The vast majority of patterns and techniques out there are simply different manifestations of these core principles. You should feel free to use whichever one you wish, as each will have its own pros and cons. There is also a large degree of subjectivity, where one technique can feel better than another. Use what you find helps you the most and don't listen to the evangelising from others. As long as you have the core principles covered, you can write good software.

If you would like to send me a comment on this post, send an email to pilky@mcubedsw.com. Alternatively send a tweet to @pilky.

Be Explicit (Except Where You Can't) tag:pilky.me,2010:view/13 2010-12-04T00:00:00Z 2010-12-04T00:00:00Z Martin Pilkington pilky@mcubedsw.com I've just finished watching David Heinemeier Hansson's keynote talk at RubyConf. It's a great talk about what makes Ruby great in his eyes and why he hasn't bothered learning other languages since discovering Ruby. There are some things that I disagree with or that contradict each other (at one point he says Ruby protects you from pointer arithmetic, but later that he likes Ruby because it doesn't stop you from doing things even though they can be dangerous if used incorrectly) but on the whole it's well worth watching.

Except for one point. It is about a sentiment that, at least to me, is shared a lot in the Ruby community (and also to a degree in the scripting language community in general). To quote DHH from the talk: "The programming equivalent of having your balls fondled when you go to the airport is… type safety". Now this post isn't going to be about type safety as such but about something that encompasses type safety, which as the title of the post suggests is: being explicit.

Typing types

So let's cover types. There are really 2 major axes of typing: static vs duck, and strong vs weak. Static typing is where you write type information; duck typing is where it is worked out at runtime. They are often seen as opposites but aren't necessarily mutually exclusive. Strong typing is where the typing is enforced by the compiler and/or runtime (e.g. you can't put an integer in a variable typed for a float) and is the opposite of weak typing.

Now, there are people that feel that writing any type information in code is wrong, you should just say "this is a variable" and you use introspection to work out the type at runtime. That's a perfectly valid point of view. There is another group of people who feel that you should write down every type and enforce everything. That is another perfectly valid point of view. But I find both of those extreme.

My opinion on strong vs weak typing is pretty clear: I hate strong typing. I find that it is like trying to protect yourself from bad things happening by wrapping yourself in layers of bubble wrap. Sure you're safe, but you're also restricted in your movement. I prefer weak typing as you're not forced into anything. Static vs duck typing is different. I love duck typing, it gives you absolute freedom. I also love static typing though, as it gives you a lot of information to build tools. This is part of why I like using Objective-C. I can choose to use static typing for objects if I wish, but if it gets in the way I can just use the id type.
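As a rough sketch of what I mean (the variables here are made up purely for illustration), Objective-C lets you mix the two styles in the same file:

    #import <Foundation/Foundation.h>

    int main(int argc, const char *argv[])
    {
        NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];

        // Static typing: the compiler knows this is an NSString, so it can warn
        // me if I assign something else to it or send it a message it doesn't
        // respond to.
        NSString *greeting = @"Hello, world";
        NSUInteger length = [greeting length];

        // Duck typing: id opts out of the static checks. Anything can go in here,
        // and whether a message is understood is worked out at runtime.
        id something = greeting;
        something = [NSNumber numberWithUnsignedInteger:length];
        NSLog(@"%@", something);

        [pool drain];
        return 0;
    }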

Now yes, writing out type information takes a bit longer, but I think DHH (along with many other Ruby devs) has been stung by Java's pedantry. Static typing gives a lot of information. It can make it far easier to write decent editors with smart autocompletion. It makes refactoring a LOT safer and more reliable. It allows for better static analysis and compiler warnings and errors. It basically makes it easier to have tools do stuff for you, rather than you having to do it yourself.

You can work out types via type inference, but that is guessing. I much prefer facts to guesses and to try and be explicit rather than being implicit.

Explicitness Saves Kittens!

Explicitness is good, doubly so in code. Explicit code lays out its intentions and doesn't make someone have to figure out what the person who wrote it was assuming. There are many ways to be explicit: cautious coding, using static types, using obvious variable and method names, writing detailed comments for non-obvious code. Wherever possible you should be doing these things and reducing where you are being implicit.

Now it's hard to get around the need to be implicit or make assumptions sometimes, which is why languages that enforce everything (e.g. Java) require more code to do the same thing than languages that don't (e.g. Ruby). But just because static typing can get in the way on the odd occasion doesn't mean that it is bad. If you have a variable that is going to hold a string, and you know it is going to hold a string and shouldn't be holding anything else, then why not set its type as a string, so that if you accidentally set it to a different type the compiler can warn you?

As an every day example of guesses vs facts, take buying a piece of furniture. You could guess "yeah, this table will fit in a space at home, it's roughly 4ft". But then you get home and find the 4ft gap you wanted to fit the table in is actually 3.75ft and it doesn't fit. But if you were explicit and measured the size of the space you would have a fact, which you can use to make a decision. Now things like that don't always come back to bite you, but they can and the more you are explicit, the less you'll get bitten.

Reversing The Analogy

So DHH used the phrase "enough rope to hang yourself" as an analogy for why enforcing stuff like typing can be bad. We don't stop people from buying long lengths of rope because one use case may be for someone to hang themselves; it's not the main use case for rope. Now I agree that you shouldn't enforce something because you can do something bad with it, and you shouldn't enforce typing simply because you can mess things up. However, the reverse is also true. As much as Java enforces types, Ruby enforces the lack of types. There is no real way for me to be explicit, barring writing more code to check the type of every variable at runtime or writing unit tests.

On the one side of the argument is "you should always be explicit" and on the other side is "you should always be implicit". In language terms this is Java and Ruby respectively. Personally I think the best way to view things is "you should aim to be explicit, except where you really can't". In language terms this is Objective-C. You are persuaded to be explicit, because it makes it easier to catch mistakes, but it isn't forced upon you.

Explicitness should not be seen as restrictive and implicitness should not be seen as dangerous. If what you are implying can only have one outcome, then it costs nothing to be explicit, and in programming that is the case in the vast majority of situations. For the remaining few cases where you can't be explicit, then be implicit, but try to make it the exception rather than the norm, otherwise you're increasing the odds of something bad happening.

If you would like to send me a comment on this post, send an email to pilky@mcubedsw.com. Alternatively send a tweet to @pilky.

The Ideal Software Development Course tag:pilky.me,2010:view/12 2010-11-21T00:00:00Z 2010-11-21T00:00:00Z Martin Pilkington pilky@mcubedsw.com So I read through the slides of a talk recently, by the incredibly talented Anna Debenham. The talk is about the state of web education in schools. It is an incredibly good talk and thankfully there is a video of another talk by Anna that covers pretty much the same things here, which I highly recommend you watch. The thing is, what Anna says about web education equally applies to any form of software design and development. The education system teaches it badly, all the way from primary school through to university. They are either too theoretical or too outdated.

I've long thought about building the ideal software development training course. The problem is there is no good course that I know of that makes good developers. There are courses out there that teach you software engineering or computer science, but they output very few good developers. There are also many courses out there that teach you how to program in certain languages.

The issue is, a lot of these courses are either too theoretical (computer science) or too tied to a certain language/toolset (pretty much all of them). You end up with lots of C programmers or Java programmers or C# programmers but very few software developers. Now, there is nothing wrong with computer science. It teaches people how to be computer scientists. The issue is that a lot of people who take computer science want to become software developers, not computer scientists.

What is a software developer?

Good question, section header! What makes someone a software developer and not just a programmer? Put simply, a software developer solves problems, a programmer just implements the solutions they're handed. A software developer is capable of coming up with an app idea, of designing the app, of understanding user experience, accessibility, interaction design, dealing with user feedback. Basically a software developer could write a full app from scratch on their own; a programmer couldn't.

The issue is that people who are programmers are often classed as developers and this has led to such terms as 'rockstar developer' to mean 'someone who can do more than just program'.

What should be the principles behind this ideal course?

The overriding principle should be to teach people how to be software developers. It shouldn't focus on one platform, language, IDE, architecture, pattern etc but it should teach people how to learn and evaluate them. And it should do the things that other courses don't do or skip over. It should teach about version control and developer tools. How many people actually learned how to use a VCS at school or university? How many people had to learn how to use an IDE themselves? How many people were truly taught how to debug?

There is also little taught about building a product. Learning about algorithms and data structures is nice, but how do you pick an idea for a product? How do you design that product, both in terms of user experience and in terms of architecting it? How do you get to 1.0, and then what do you do after that?

And possibly the most important principle. This should not be seen as an engineering course or a science course. It should be seen as an art course. As much as we like to paint software development as an engineering or scientific discipline, it is much more of an art. Sure, there are best practices and patterns and such, but there are best practices and patterns in music, graphic art, literature etc.

So what should be the course syllabus?

I think there should be various threads to the syllabus of such a course:

  • Programming
  • Debugging & Testing
  • Tools
  • User Experience
  • Building Software
  • From The Experts

So what does each of these entail? Well I'll go into each in a bit more depth:

Programming - This will be a large overview of programming and be the closest to what other courses do. It will teach how to program, type theory, data structures and all that stuff. However, it will not just paint these in terms of theory but show how they apply to certain programming languages. It will cover various types of languages: procedural, object oriented, functional.

Most importantly it shouldn't focus on one single programming language. Ideally it would use scripting languages, compiled languages, dynamic and static languages, verbose and concise languages, high level and low level languages. And it would explain the benefits of each type of language. By the end of the course students should know 5-10 programming languages to varying degrees, but most importantly students should have the skills to go and master those languages and learn new languages.

Debugging & Testing - This thread will do exactly what it says on the tin. It will teach people about testing, all the way from unit testing to integration testing to UI testing. It will show them various tools and approaches. It won't cover anything like TDD as this is just a technical thread, that will be covered in Building Software. As for debugging, while it will teach how to use various debuggers, learning and using a debugger is easy. The key thing here is how to debug and basically building up the problem solving skills.

Tools - It's pretty hard to program without tools. So this thread will cover everything from text editors to IDEs to refactoring tools to version control software to compilers to issue trackers. It should explain the purpose of the various types of tools and show the basics of each tool.

User Experience - A software developer who doesn't care about user experience isn't a real software developer. This thread will pound into the students the principles of user experience. It should focus on things such as usability, aesthetics and accessibility. It should teach some fundamental principles of UI design. While the student may not end up an expert, they should hopefully have an appreciation of UX.

However, this won't just be about UX for user-facing apps, but should also consider the programmer as a user. As such it should also cover how to create clean code and design APIs. Not all software is written for consumers; some is written for other programmers. An API is simply another UI.

Building Software - How do I come up with an idea? How do I flesh that idea out? How do I create a list of functionality? How do I cut that down to a 1.0? How do I architect the software? These are the sorts of questions this thread should help answer. It should also cover various methodologies, giving pros and cons and examples but not pushing any single one over the other. It should look at how and why to choose certain fundamentals such as "web, desktop or mobile?" and "which language/API?". It should build up the skills of working in a team and actually creating real software. This is basically the thread that ties together all the stuff learnt in the previous 4.

From The Experts - The other 5 threads make a point of not saying "this is the way you should do this", but aim to teach many different ways, give objective overviews and essentially teach you skills rather than just knowledge. This last thread on the other hand is the opposite. It should bring in people from various parts of software development to give talks on subjects close to them and give their opinions on why you should do something their way (or even why you should do it another way). One week there could be someone saying that unit testing is useless and the next someone talking about why they write unit tests for everything, but this will give two opposing views for students to consider and make up their own minds.

Will we ever see such a course?

I honestly don't know. If the money could be found, and a competent set of people found to teach this stuff, then it could definitely be done. But the question is whether the will is there. Would the industry see the potential of such a course and help back it? Would a university run it or would it need its own establishment? I'd love to see such a course and to have taken such a course, and if the opportunity ever arose I'd love to help create and maybe even teach such a course. But until then, I'll just keep dreaming.

If you would like to send me a comment on this post, send an email to pilky@mcubedsw.com. Alternatively send a tweet to @pilky.

Why The Tuition Fee Rises Are Good tag:pilky.me,2010:view/11 2010-11-10T00:00:00Z 2010-11-10T00:00:00Z Martin Pilkington pilky@mcubedsw.com There has been a lot of anger over the Government's decision to increase the cap on university tuition fees to £6000, with £9000 allowed in some circumstances. There is also anger over the increase in interest rates. However, there has been little proper analysis into how this works out financially. After all, these aren't real loans.

So what is the situation for students today? You pay £3290 a year for tuition (this actually increases in line with inflation) and get roughly £3500 or so in maintenance loan (this is a rough guess based on what I got at university). You start paying interest on this loan at a rate defined by the rate of inflation at a certain point in the year. So already it is better than any other sort of loan. But where it is really different is you pay back a percentage of your income over a certain limit, which at the moment is 9% of income over £15,000. And if after 25 years you have any remaining balance, the amount is written off.

So what will change in the future? Well the fees will be around £6000-9000, the interest rate will be tapered, from 0% on incomes below £21,000 to 3% + inflation for incomes above £41,000, the pay back threshold will be raised to £21,000 and the payback limit will be increased to 30 years.

Now at first glance that seems pretty bad. Fees increase by 2-3x, there is a potentially large increase in interest rates and it's 5 years longer before the debt is written off. But the key thing that is being missed is the £6,000 increase in the pay back threshold and that interest rates are progressive. This is what helps make it a more progressive system than before.

The Scenarios

So what does this ultimately mean? Well let's take some (very simple) scenarios. There are two components to each scenario. The first is whether they use the current or future fee system (and whether in the future system they pay £6000 or £9000 a year). The second is the starting wage and average raise per year. So let's go over the fee system components:

Current system:
Fee: £3290 a year
Maintenance Loan: £3500 a year
Pay back threshold: £15,000
Pay back rate: 9%
Pay back period: 25 years
Interest: rate of inflation

Future system:
Fee: £6000 or £9000 a year
Maintenance Loan: £3500 a year
Pay back threshold: £21,000
Pay back rate: 9%
Pay back period: 30 years
Interest: 0% to 3% + rate of inflation

Two simplifications will be used. Firstly, I won't be increasing the fee in line with inflation each year. Secondly, I'll assume that interest is constant at the target rate of 2% over the payment period. These will also be for 3-year courses. For these fee systems I'll input some starting wage and average yearly raise figures and find out how much the student pays back. I'll use 3 raise figures: 2% (i.e. inflation), 4% and 6%. I'll also use 4 starting wages: £10k, £15k, £20k and £25k.
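For the curious, here is a minimal sketch of the kind of calculation behind these figures (not the actual code used for the table, and the function is my own invention), assuming the simplifications above: constant interest, a fixed yearly raise, repayments of 9% of income over the threshold, and anything left at the end of the period written off.

    #import <Foundation/Foundation.h>

    // A rough sketch of the repayment model: interest accrues each year, then 9%
    // of income over the threshold is repaid (never more than the balance), and
    // whatever remains after the repayment period is written off.
    static double totalRepaid(double loan, double interest, double threshold,
                              int years, double startingWage, double raise)
    {
        double balance = loan;
        double income = startingWage;
        double paid = 0;

        for (int year = 0; year < years && balance > 0; year++) {
            balance += balance * interest;
            double repayment = MAX(0.0, (income - threshold) * 0.09);
            repayment = MIN(repayment, balance);
            balance -= repayment;
            paid += repayment;
            income += income * raise;
        }
        return paid;
    }

    int main(int argc, const char *argv[])
    {
        // The current system: 3 x (£3290 fees + £3500 maintenance), interest assumed
        // constant at 2%, £15,000 threshold, written off after 25 years.
        double loan = 3 * (3290.0 + 3500.0);
        NSLog(@"Total repaid: £%.0f", totalRepaid(loan, 0.02, 15000.0, 25, 20000.0, 0.04));
        return 0;
    }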

I've put the results in the table below. In the Future rows, a † marks where someone is better off under the future system than under the current one; figures without a † are worse off. I've also marked with a star where there is nothing left for the government to write off.

              £10,000                 £15,000                 £20,000                 £25,000
              2%      4%      6%      2%      4%      6%      2%      4%      6%      2%      4%      6%
Current       222     6444    17524   9491    22472   28416*  23905   26819*  25783*  25792*  24923*  24423*
Future £6k    0†      4782†   22028   3181†   21738†  45591*  16484†  44361   45581*  34578   48277*  44737*
Future £9k    0†      4782†   22028   3181†   21738†  51952   16484†  44361   67731*  34578   69491   66326*

So as you can see, in most cases people are worse off. However, there are several cases where people are better off, at the lower end of the income scale. So the Government's claims that this new system is more progressive seem to be vindicated. It is also worth noting that, of those who are worse off under the new system, the lowest yearly wage after 30 years is £45,000.

A Graduate Tax

The idea behind a graduate tax is simple: university for UK based students is completely funded by the Government. After you graduate you then pay a percentage of your income back for a certain number of years. The thing is, that sounds strangely familiar. Those who never fully pay back their loans, and who the Government ultimately writes off the remainder for, are basically paying a graduate tax now.

If you look carefully at the table above you will see some peculiar figures. In the old system those who start on £25,000 end up paying back more the slower their wage rises. The reason is that they pay off the amount they owe before the Government gets a chance to write anything off; the more you earn, the more you pay each year, so the less interest you ultimately pay. The way to turn the system into a graduate tax would be fairly simple conceptually. Remove the notion of fees and just have everyone pay back 9% of their income over £21,000 for 30 years. As an example of what this would do, I've applied it to the two extremes of the starting wage below:

              £10,000                 £25,000
              2%      4%      6%      2%      4%      6%
Graduate Tax  0       4782    22028   34578   69491   121181

Out of the 6 fictitious people there, only 2 end up paying more money than under the £6000 tuition fee system, and only 1 more than under £9000 fees. Everyone who starts on £10,000 pays no more than in the fee based system, as does the person starting on £25k with a 2% yearly rise. The £25k/4% rise person ends up paying the amount they would for a £9k a year course. The only major loser is the person who starts on £25k and gets a 6% yearly rise. They end up going from paying £66k to £121k. Of course, over the same time period they will have earned £1.98 million pre-tax, so it's not exactly the biggest impact. So ultimately the fee system is a quasi-graduate tax. The main thing that the fee changes do is make people who are poorer after university pay less, make those who are richer pay more, and have more people paying it almost as a tax.

While the initial headline of "Tuition fees to rise to £9000" seems shocking and has caused students to protest in London, the reality isn't anywhere near as bad. I admit that I was appalled by the idea at first, but on closer analysis it doesn't seem so bad. Ultimately, people who are protesting against the new system aren't protesting for a system that makes it easier for the poor to go to university. The new system will hardly make a difference to that as you still don't pay anything up front. What these people are protesting for, even though they may not know it, is higher taxes for the poor and lower taxes for the rich.

If you would like to send me a comment on this post, send an email to pilky@mcubedsw.com. Alternatively send a tweet to @pilky.

The Mac App Store tag:pilky.me,2010:view/10 2010-10-30T00:00:00Z 2010-10-30T00:00:00Z Martin Pilkington pilky@mcubedsw.com So… that Mac App Store thing. I've been wanting to write up my thoughts on it for a while but just haven't found the time or drive to. Then I read this post by Marco Arment. It's an interesting read, but is from the perspective of an iOS developer and I can't say I agree with much of it.

Ultimately, there are two extreme views of how the App Store will pan out:

1. Much more exposure for Mac apps with huge increases in sales, but prices won't change
2. We'll see prices drop to iOS levels and possibly also see the kind of apps that brings

I think both of these are actually wrong to a relatively large degree. I'll start with number 2.

The iPhone isn't a model for the Mac

Personally I think number 2 is completely off base and ignores a very key thing: people don't buy a $1000+ computer to use novelty apps. People use computers for doing things they spend a lot of time on, whether that is writing software, making a movie, doing the family budget, creating artwork, doing homework or playing games. People aren't in and out of apps quickly; they invest a lot of time in them.

The fact is, iPhone apps don't cost much because they aren't worth much in terms of your time. This isn't because of anything the iPhone does, but just the fact of how it is used. It is the "quick reference on the go" device, the "pass the time while waiting" device. It is a mobile phone. How often do you spend more than 10 minutes in an app on a phone? Almost never for apps and not that often with games.

Look at the most popular games through the history of the iPhone. Flight Control, Angry Birds, DoodleJump. These aren't games you sit down and spend hours playing, they're designed around being played in small chunks. The apps that ship with the iPhone aren't any different. Weather: open, flick to city, check weather, out. Maps: open, do a search, get info, out. Notes: open, new note, jot something down, out. Even web browsing on the iPhone is generally a quick in and out process, especially with optimised sites. So the market for the quick reference/time passing app device is already filled by the iPhone and iPod touch. To expand on Steve Jobs' analogy of cars and trucks from a while back, the iPhone is the small town car that you use to run around to various places.

Next up the scale is the iPad. This is where we start to see a change. Prices are higher, apps are more capable. The iPad isn't a quick reference guide, you don't use it to pass the time. This is really where Apple's "device" line ends and its "computer" line begins. It's not a massively powerful machine with a huge screen and lots of expansion aimed at developers, creative professionals, scientists etc who need those things. No, it is the computer for the rest of "us". In the automobile analogy this is the family saloon. Not as nimble for the running around jobs as the town car, but better suited to the longer drives.

Apps for it aren't going to be as powerful as on a Mac, and as a result aren't going to be as expensive. They also aren't necessarily going to take up as much of your time as a Mac app would. Much longer than the 5-10 minutes of the iPhone but not the several hours of the Mac. This is the middle ground and is where I think Marco's post is coming from.

But then we have the higher end: the Mac. In the car analogy this is the truck. Not everyone has or needs a truck, but everyone relies on trucks to some degree as these do a lot of the heavy lifting. Mac apps today focus on tasks that people will spend a lot of time with and I don't see that changing because of the App Store. Sure there will be quite a few new apps that play on the novelty factor, but they won't seriously affect the reason people use a Mac in the first place, rather than their iPad or iPhone. Of course there is one thing that could cripple the Mac App Store, restricting it to only these sorts of apps, and I'll handle that later.

The App Store isn't the holy grail of exposure

Let's get this straight: the App Store isn't going to make your app that much more appealing. It will handle selling your app, installing your app and updating your app. Does it provide a place where people can easily find Mac apps? Yes. However, such a place already exists. Click the Apple menu and the 3rd item down is "Mac OS X Software…", which takes you to Apple's downloads page. This provides a catalogue (curated of course) of lots of Mac apps that are available. Of course the issue is, how many people actually use that? Apple doesn't advertise it, but they will advertise the Mac App Store.

Even with the App Store, the requirements on you will be the same. You will need to market your applications well, you will have to find the right markets to advertise to. Apple won't do that all for you, though they can help you along the way with staff picks. I think those who are on the App Store will see an increase in sales, but I think that increase will be somewhat muted compared to what some are suggesting.

Catering for real software

There are a few big issues with the App Store model Apple has created on iOS that grate against how more powerful and expensive software, like Mac software, is sold. First of all, you need demos so users can try your app. You also need paid upgrade pricing. Unless your initial price is quite cheap, people won't want to pay full price for each upgrade, and developers aren't going to just do free updates for life, as the more mature a product gets the more of its revenue is based on these upgrade fees. These are things developers have been asking for on iOS for a long time and they are crucial to the Mac.

These are two limits that could conspire to damage the App Store, and possibly the Mac as a whole. The fact is that without these two features, you're severely limiting how much you can charge for an app. By limiting how much you can charge you're limiting how much a developer can make. By limiting how much a developer can make you are limiting how much they can do. As such developers have to go for lower hanging fruit and create apps that aren't as good as the ones they may want to create.

These limits could cause the Mac App Store to become what Marco sees it as being, and I don't see that as being a good thing as those apps don't have as large a market on the Mac. But given Apple's advertising push behind it, people will start to see it as the main place to get apps and almost a definitive cross section of what is available. As most long time Mac users know, it's been a long hard push for both Apple and developers to show that the Mac isn't a "toy computer" but something that you can do powerful things on. The last thing we need is for the built-in app catalogue to be full of toy apps.

My App Store plans

So obviously I've thought about how M Cubed will handle this. Initially my thought was "every app on the app store for day one and drop my own store". That is of course the ideal scenario, but the more I think about it, the more cautious I've become. First of all, there are the above issues. I'm not going to want to change my prices much and people aren't going to pay my prices without first trying the app. I'm also likely going to have at least one paid upgrade next year so I don't want to commit an app to the App Store without being able to offer users an upgrade price.

I would also want to migrate my existing users over at no cost to them. Without being able to get them into the App Store system without charging them again, I would have to maintain two versions of the app and keep managing a store, downloads, installation, updates etc. the very things the App Store is meant to handle for me. If I'm going to move an app over to the App Store I want to potentially go whole hog and drop it from my store. I'm happy doing the dual thing for a while but eventually I'd want to simplify things for me and my customers.

Then there is the case of money. I'm getting 30% less per sale if I sell on the App Store. This means to make the same amount I do now I need to sell roughly 36% more copies (taking into account PayPal's cut today), or put another way, for every 10 apps I sell today I need to sell just under 14 on the App Store to get the same amount of money. Now it is likely that the App Store will allow apps to sell more than 36% more copies, but I'm not certain enough yet to put all my eggs in one basket.
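
As a rough sketch of that arithmetic (the $40 price and the standard 2.9% + $0.30 PayPal fee are just example figures, not my actual prices, so the exact percentage lands a little either side of 36% depending on what you plug in):

#import <Foundation/Foundation.h>

int main(void) {
	@autoreleasepool {
		// Assumed figures, purely for illustration.
		double price = 40.0;
		double directNet = price - (price * 0.029 + 0.30); // kept per direct sale via PayPal
		double appStoreNet = price * 0.70;                  // kept per App Store sale (Apple takes 30%)

		double ratio = directNet / appStoreNet;             // App Store sales needed per direct sale
		NSLog(@"Extra sales needed on the App Store: %.0f%%", (ratio - 1.0) * 100.0);
		NSLog(@"10 direct sales are worth roughly %.1f App Store sales", ratio * 10.0);
	}
	return 0;
}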

And finally there is a rather big issue: time. I don't have the time to prepare my apps for the App Store and go through the process of getting them approved. I've got a lot to do over the next 6 months, some of it on a tight schedule, so I don't really have the ability to drop everything. In a way this is a good thing as it forces me to take a trickle approach to the App Store. I can slowly submit my apps one by one and make decisions based on what others have experienced.

One thing is for sure, this is the most significant thing to happen to Mac software since OS X was released back in 2001. I just hope Apple is willing to acknowledge the needs of developers rather than pushing ahead regardless.

If you would like to send me a comment on this post, send an email to pilky@mcubedsw.com. Alternatively send a tweet to @pilky.

On Syntax Flexibility tag:pilky.me,2010:view/9 2010-09-03T00:00:00Z 2010-09-03T00:00:00Z Martin Pilkington pilky@mcubedsw.com It's no secret that I'm not a big fan of Ruby's syntax. I've grown to love Ruby's functionality and I would like to see language support for some of it in Objective-C (symbols, non-alphanumeric method names and modules/mixins come to mind). I'm also not too opposed to the syntax of Ruby per se. It is concise and fairly readable.

The issue I have is the huge amounts of flexibility there is in the syntax. A bit of flexibility is nice, especially for conciseness. I wouldn't mind the ability to define arrays, dictionaries, sets and numbers in a shorter syntax in Objective-C. The issue I have is where flexibility causes ambiguity, especially when something is completely identical. This is why I have an issue with the dot syntax in Objective-C, as it looks identical to struct access and adds ambiguity.


I had a discussion with someone over IM about this and I gave an example. Take this line of Ruby:

validates :title, :uniqueness => true

Harmless enough. But it is completely identical to these 5 as well:

validates(:title, uniqueness: true)
validates(:title, :uniqueness => true)
validates(:title, {:uniqueness => true})
validates :title, uniqueness: true
validates :title, {:uniqueness => true}

Now the person I was talking to fired back with an example from C/Objective-C, showing 5 ways to write the same statement:

if (blah == 3) {
	return NO;
} else {
	return YES;
}

if (blah == 3)
	return NO;
	return YES;

if (blah == 3)
	return NO;
	return YES;

if (blah == 3) {
	return NO;
}
else {
	return YES;
}

return (blah == 3 ? NO : YES);

He also missed the basic "return blah != 3". Now at first glance he has a point: six ways, including the one he missed, to write the same thing. There are some issues with the argument though. While they are all logically identical they aren't functionally identical. To give an example of the difference, the following two statements are logically identical but functionally different:

for (NSUInteger i = 0; i < 5; i++) {
	NSLog(@"%d", i);
}

NSUInteger i = 0;
while (i < 5) {
	NSLog(@"%d", i);
	i++;
}

Logically, they are both the same: they loop until i reaches 5, printing out the value of i and then incrementing it each time. However, one is a for loop and one is a while loop; they are completely different constructs. There is also no ambiguity as to which one is which.

Now 3 of those 6 Objective-C examples are not only logically and functionally identical, but minus whitespace they are completely identical, so we can throw 2 of those away. The two single line ones are functionally different as well, though they are logically identical. This means we have two logically and functionally identical items with different syntax:

if (blah == 3) {
	return NO;
} else {
	return YES;
}

if (blah == 3)
	return NO;
	return YES;

Now, I'm sometimes guilty of leaving out curly brackets on one-line statements myself, but the issue is that it can cause ambiguity and therefore bugs. I could add a second statement and it would be fine in the first version but could break in the second unless I remembered to add the brackets in. While it may annoy some users, it wouldn't affect me in the slightest if curly brackets were required in all cases and you got a compiler error if you missed them out.
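
To make that danger concrete, here is a small sketch of the failure mode; isValid() and logFailure() are made-up names purely for illustration:

#import <Foundation/Foundation.h>

static void logFailure(void) { NSLog(@"validation failed"); }

static BOOL isValid(NSInteger blah) {
	if (blah == 3)
		logFailure();
		return NO;  // indented as though it were part of the if, but it always runs
	return YES;     // now unreachable; with braces around both statements this bug can't happen
}

int main(void) {
	@autoreleasepool {
		NSLog(@"isValid(3) = %d, isValid(4) = %d", isValid(3), isValid(4)); // both print 0
	}
	return 0;
}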

Ruby's Ambiguity

validates :title, :uniqueness => true

The issue with the above statement is that it doesn't fully state what is what. It is a list of words. Some people may feel differently, but to me it is gaining a little bit of conciseness at the cost of ambiguity. Sure, a seasoned Ruby expert could tell you that it is a method call with two arguments, the first of which is a symbol, the second of which is a hash. But the issue is that the code doesn't explicitly say that. It gets even worse if you have multiple items in the hash:

method :symbol, :key1 => true, :key2 => "foo"

At first glance this looks like a method that takes three arguments. In fact it is one that takes two. Written in an unambiguous way it would be:

method(:symbol, {:key1 => true, :key2 => "foo"})

Convention Over Configuration

What prompted this post is the response I got to a tweet where I said that the fact that Ruby has so much optional syntax makes it feel like an app with a massive prefs window. Ideally as a software designer, you should be aiming to use sensible defaults rather than adding a preference. Adding a preference should feel like a cop out. The ideal number of preferences in any app should be 0.

Ruby feels like it goes in the opposite direction at times. It adds a preference because it might be nice for some people to do it that way. It seems ironic that it then has a framework which is built upon the core principle of convention over configuration.

Now yes, some people will say "well don't use Ruby then" or "just use the syntax you want". But the thing is that Ruby is a very good language under the hood. It is incredibly similar to my favourite language: Objective-C. So much so that it can run on top of Objective-C's runtime. Rails is also a very good API and has some very smart people behind it so it is worth using. While some people I know dislike Objective-C, they put up with it because they enjoy Cocoa and believe it to be a very good API. This is no different with me and Rails.

It all comes down to the syntax and the fact that many people use different forms of it. I prefer to code explicitly, so that the meaning of the code is absolutely clear and if this means being a bit more verbose then so be it. Others don't like to be that way, and while I don't aim to change their mind I do feel I need to have a little whinge about it so I can explain my position and why I'm whinging in the first place.

If you would like to send me a comment on this post, send an email to pilky@mcubedsw.com. Alternatively send a tweet to @pilky.

Defining Discrimination tag:pilky.me,2010:view/8 2010-08-19T00:00:00Z 2010-08-19T00:00:00Z Martin Pilkington pilky@mcubedsw.com The title of this post is a little misleading. I'm not aiming to define discrimination, that has been done already. What I'm wanting to do is put down my definitions of what should and should not be illegal to discriminate against. This isn't to say what is right or what is wrong, there are some forms of discrimination I disagree with but think shouldn't be illegal.

I've been wanting to write this article ever since I heard the response to the incident in the UK back in March, where a Christian couple who ran a B&B turned away a gay couple. For those who haven't heard about it, or forgot, you can read about it here: http://news.bbc.co.uk/1/hi/england/8578787.stm. The whole story brought up an interesting question though: if turning away someone because they are gay is discrimination, then surely forcing a Christian couple to take in a gay couple is discrimination against that couple?

What you are vs who you are

What am I? I'm a 6ft 22 year old white, straight British male with no disabilities. Who am I? I'm a fairly liberal minded, agnostic atheist computer programmer who likes rock/pop/alternative music and supports Blackburn Rovers. This is a key distinction. I can't change what I am, those are facts that have been affected by my DNA and environmental factors outside of my control. I did not choose to be white or straight or British or male. Who I am though is full of choices. I choose to be liberally minded, I choose not to believe in an all powerful deity, I choose to support Blackburn Rovers. It is comparatively easy for me to change any of those things, the only thing that prevents me would be stubbornness.

Now this is key to where I think the line should be drawn on legality. And that legality should be absolute bar two exceptions which I will go into in a bit. Ultimately I believe that discrimination on what you are should be illegal, but discrimination on who you are should be legal, though whether it is acceptable is a different matter. Discrimination based on your physical features, age, ethnicity, gender, sexuality, nationality or ableness should be illegal. Discrimination based on your political persuasions, religious beliefs, personal interests or what sports team you support should be legal.

The two exceptions to the illegal nature of discrimination are fairly basic ones:

1. The one job where discrimination on what you are is allowed is casting for acting/modeling, where if you are casting for the part of a woman, it is acceptable to reject any males who audition. However, this should be limited purely to the bounds of what the script requires. If the script only requires a woman then a woman of any ethnicity should be equally considered.

2. The one universal case where it is allowed is safety. It is ok to say that a blind person cannot drive, as to do otherwise would put that person and the wider public in danger. It is ok to say that a person who is too short may not go on a roller coaster, as it would put them in danger.

Rights vs fundamental freedoms

So surely being able to legally discriminate against someone based on their political or religious beliefs encroaches on their rights to freedom of speech, religious belief and political belief? Well no. There is a large difference in your right to these freedoms, and them being fundamental freedoms that are universal, no matter the situation. I'm allowed to say what I want, but that does not mean that I can say it where I want to. I have the choice of believing what I want religiously or politically, but that doesn't mean I can impose those beliefs on others.

If I turn around and say, I would not hire someone who is a member of the British National Party, I'm not removing their freedom to believe in the politics of the British National Party or to say what they like. I'm removing their freedom to work for me and be a member of the BNP at the same time. I believe such things should be up to a person or organisations and so should be legal. If that person really wanted to be hired by me, they could choose to give up their membership.

That said, I believe that such forms of discrimination should be used in only extreme circumstances. I personally would not reject someone based on their religious or political affiliations, unless they are completely unacceptable or illegal, but at this point they are often far distorted from any recognised form of religion or politics.

Yes these rights should be looked after, but not at the expense of the truly fundamental freedom, that of everyone being treated equal based on what they are. Someone's right to believe that homosexuality is a sin is trumped by the fundamental freedom of a gay person to be treated equally based on what they are. Someone's right to believe that black people are inferior is trumped by the fundamental freedom of a black person to be treated equally based on what they are.

Rights are very specific, outlining core things that a person should be allowed to do. But rights are made specific so they don't encroach on this fundamental freedom of basic equality, and they are always subject to it. Basic equality is the ultimate goal of many societies. Most people agree with the ideal that you should be judged on your choices and what you do, rather than things entirely out of your control, and this is where the distinction should be made upon what should be legally allowed discrimination and what should be illegal discrimination.

If you would like to send me a comment on this post, send an email to pilky@mcubedsw.com. Alternatively send a tweet to @pilky.

Reply To A Scam Artist tag:pilky.me,2010:view/7 2010-08-16T00:00:00Z 2010-08-16T00:00:00Z Martin Pilkington pilky@mcubedsw.com I put my old iMac for sale on eBay yesterday. A few hours after it went on sale, I saw that someone had bought it at the "Buy Now" price. I got a bit suspicious as they had only signed up that day and had no feedback. My suspicions were confirmed this morning when I received emails from eBay saying that the buyer had left the site. All credit to eBay, I was able to get through to someone on the phone within 90 seconds of calling (despite them saying there was a large number of calls) and get my fees refunded so I could re-list it.

Anyway, I checked my spam folder on a whim a little while ago and found some emails from the fake buyer. Turns out they had sent money to a PayPal account (that no longer exists) and were wanting me to ship the iMac to Nigeria. I could have just left it, but I thought "why not have a little fun". So here is my reply:

Scam reply

If you would like to send me a comment on this post, send an email to pilky@mcubedsw.com. Alternatively send a tweet to @pilky.

Retina Displays And Resolution Independence tag:pilky.me,2010:view/6 2010-08-06T00:00:00Z 2010-08-06T00:00:00Z Martin Pilkington pilky@mcubedsw.com There is a misconception that in order to get the effect of the retina display on the iPhone you need a display that is 326 dpi (dots per inch), regardless of the device. Now retina display is a marketing term of Apple's, but for the purposes of this blog post I'm going to define a "retina display" as so:

A display where a person with 20/20 vision is unable to tell apart individual pixels at a normal viewing distance

Now these are important for calculating what defines a retina display for various devices. All a retina display is doing is taking advantage of the limited resolution of the human eye, much in the same way that a film takes advantage of the limited frame rate of the human eye. But whereas with motion the human eye can only see roughly 24 frames a second no matter what, with detail its limit is a function of the distance to the object and the quality of the eye. As such, if you put an iPhone 4 right up to your eye, you can still make out individual pixels.

For a good explanation of all of this, and for where I got the basis of my calculations, I highly recommend reading this blog post: http://blogs.discovermagazine.com/badastronomy/2010/06/10/resolving-the-iphone-resolution/

Calculating the DPI

From that blog post we can gain the scale factor of 3438, which is based off 20/20 vision (note this isn't perfect vision, but few people have eyesight that good). We also learn that we can calculate the dpi needed for individual pixels to be indistinguishable at a certain distance. I am going to calculate this for four devices: an iMac (desktop monitor), a MacBook Pro (laptop monitor), an iPad (tablet) and an iPhone (smartphone).

The calculation is pretty simple. To get the retina display effect dpi "x" for a screen "n" inches away we just do:

x = 3438/n

So using this calculation, and what I consider to be comfortable and reasonable viewing distances, I've calculated the minimum pixel density at which pixels become indistinguishable. Remember, this is for someone who has 20/20 vision, so for a lot of people this figure may be lower.

Device         Distance     Density Required
iMac           24 inches    143dpi
MacBook Pro    18 inches    191dpi
iPad           15 inches    229dpi
iPhone         12 inches    287dpi
As you can see, it isn't that much in many cases. The iMac is only about 30-40dpi away. And the iPhone is actually about 40dpi past what is needed.
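
Here's a minimal sketch of the same calculation, using the viewing distances assumed in the table above:

#import <Foundation/Foundation.h>

int main(void) {
	@autoreleasepool {
		// Viewing distances are the same assumptions as in the table above.
		NSString *devices[] = { @"iMac", @"MacBook Pro", @"iPad", @"iPhone" };
		double distances[]  = { 24, 18, 15, 12 };
		for (int i = 0; i < 4; i++) {
			// 3438 is the 20/20 vision scale factor from the linked article.
			double dpi = 3438.0 / distances[i];
			NSLog(@"%@ at %.0f inches: %.1f dpi required", devices[i], distances[i], dpi);
		}
	}
	return 0;
}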

Resolution Independence

Of course just having a higher density screen won't make a difference on its own. Doubling the density of the iPhone would have just made everything unusably small; Apple had to add resolution independence, allowing twice the pixels to render the same item at the same physical size on screen.

Some things are already resolution independent by being vectors, text for example. If you were to put some text on an average desktop monitor (~95-100dpi) and put the same text on a high res laptop monitor (eg a 17" MBP at 1920x1200, or 133dpi) and set their font sizes so that the characters are physically the same height on both monitors, you will notice that the text looks far nicer on the high res display, because it is able to use more pixels to render the same thing.

So what does this mean for an app in general? Well at the moment the resolution independence in OS X is quite buggy, and has actually regressed in 10.6. But we can still get a general feel as to what resolution independence would do for quality. Let's assume the next screenshot is taken at 94 dpi, the density of my previous 24" iMac.

TextEdit document at scale factor 1

Now if we scale this to a scale factor of 1.25, this gives roughly 117.5dpi. Assuming my current iMac was 117.5dpi rather than the 110dpi it is, they would have the same physical size. But as you can see from the screenshot below, various elements would be higher quality, such as the text, the menus and the close/open buttons.

TextEdit document at scale factor 1.25

Of course some things don't look better at the moment, such as the menu bar, but these are bugs. Ultimately these would have to be fixed in order to reach the pixel densities needed for a retina display effect, as increasing the resolution that far, without increasing the scale factor, would lead to eye strain as we try to concentrate on smaller and smaller items.

If you aren't building a smartphone, you don't need as high a pixel density to get a display of similar quality to the iPhone 4's. We are very close to the pixel densities needed for the retina display effect, especially on the desktop. As prices of higher density displays come down, they will make their way into consumer goods. When mixed with resolution independence we will start to see displays in all devices that appear as good as the iPhone 4, without the need to wait for them to treble the resolution.

This isn't to say that if you had two displays side by side, one at a much higher resolution than the other, the higher resolution one wouldn't look better. But to match an iPhone 4 viewed from 12" away, a desktop display doesn't need anywhere near as high a resolution.

If you would like to send me a comment on this post, send an email to pilky@mcubedsw.com. Alternatively send a tweet to @pilky.

On Illness, Both Mental And Physical tag:pilky.me,2010:view/5 2010-08-04T00:00:00Z 2010-08-04T00:00:00Z Martin Pilkington pilky@mcubedsw.com My health hasn't been all that good the past 3 years. Up until 4-5 months ago I was suffering from a mental illness. And now I have a physical illness which is technically even worse than what I had before. Oddly enough though I've been quite open about the physical illness and fairly quiet about the mental illness. This blog post will change all that and give details about them both: what they are, how they affect me and how you can get help if you need it.

Physical: Testicular Cancer

About 6 months ago I noticed that one of my testicles was larger than the other. I simply put this down to never noticing before and apparently it can be natural. However, about 6 weeks ago I started suffering pain and swelling in the same testicle. Pain which also extended up into my abdomen and made me feel ill. The following day I went to see the GP.

I couldn't get in to see one of my regular GPs, one was fully booked and the other was away, so I saw a stand in GP. She examined me and determined it was most likely an infection and gave me a course of antibiotics. Unfortunately after I finished the antibiotics I was still in pain and the swelling hadn't gone down, so I went back to the GP. It was the same GP and she said that the swelling could take 2 weeks to go down.

So about a week and a half later I'm still aching and the swelling hasn't gone down one bit, so I go back to my GP. This time the GP that was away has returned so I go and see him. After I explain the story so far and he does a quick examination he refers me to a consultant urologist to find out the cause. This is where one of the NHS targets kicks in (one of the ones backed by medical suggestion): they will aim to have you seen by a consultant within 2 weeks of referral if cancer is suspected.

The following week I go and see the consultant, he examines me and then sends me for blood tests and an X-ray. The following week I go back and am informed that they suspect it might be cancer and want to perform surgery to remove the testicle. I'm also sent for a CT scan to check if it has spread in ways the X-ray didn't show (let me just say that the contrast dye they put in you for a CT scan is one of the weirdest things).

So the next week I'm booked in for surgery to have the testicle removed. It's only a day surgery, I arrived at 7:30am and was out around noon. In place of the testicle I had a silicone replacement put in. Such a thing had never occurred to me as existing until they told me about it, but if they can offer a silicone implant for women who have had a mastectomy it is only logical the same exists for men who have had an orchiectomy.

So they take the testicle out, both as a precaution and to test it, as trying to get a biopsy of part of the testicle could cause cancer cells to spill out. One of the good things about testicular cancer is the main tumour is nicely contained in a relatively non-vital organ that can easily be removed with few side effects.

And so we arrive at last week. On the Tuesday I went to have another blood test. The tests look for specific chemical markers given off by tumours. The test from before the surgery showed high levels of these markers, but in theory subsequent tests should show these markers dropping over time.

On Thursday I had another meeting with the Urologist. I was informed that it was indeed cancer and that it looked like a mixture of the two types of testicular cancer, which they said wasn't uncommon. However, I was also informed that my blood test from earlier that week showed a drop in the marker levels and my CT scan was clear, barring a few swollen lymph nodes. Now this can be a sign of the cancer having spread, but it can also be a sign that I've had an infection or many other relatively benign causes.

On Friday I went for my first meeting with the Oncologist. She also took a look at the CT scan and said that it is nothing to worry about at the moment. I was informed that at this stage I don't need any extra treatment, but I will be under observation for the next 2 years, with checkups every 4 weeks for the first year and every 8 weeks for the second year. The reason for this is that there is still a chance the cancer will return, requiring additional treatment. I've got more diagnostic tests to go through over the next few weeks to double check those lymph nodes and make sure my markers are coming down as expected.

My Reaction

My reaction to all this has been a mixture of humour, curiosity and indifference. Some people when informed they have cancer would break down, and for some cancers this is perfectly understandable. Testicular cancer though is about the "best" cancer to have, given its high survival rates. I've seen figures of 95-99% survival rates. You may think "but there's still a 1-5% chance of dying of it" but consider this: the Spanish Flu epidemic of 1918 had a survival rate of around 97.5%. This form of cancer today, is to a degree no more deadly than the flu 90 years ago.

The other factor in my reaction is how much it has affected me. I've had a bit of aching pain prior to surgery, pain and swelling after the surgery and a load of needles put into me for various things. But in terms of affecting my life it has been relatively minimal. I've had colds that have affected me worse.

My Advice

My advice is the same as all the medical advice. Check yourselves regularly, and if you find any abnormal lumps or swelling, or you feel any pain, then go see your GP immediately. Statistically it is more likely to be a benign cyst or an infection, both of which are easily treatable. But on the off chance that it is cancer, it isn't worth the risk. The sooner it is caught the better, as it can potentially spread to your lungs and brain. Even if it is at an advanced stage though, it is by no means necessarily fatal. It is this advanced stage of testicular cancer that a certain Lance Armstrong had and he recovered.

Mental: Anxiety, Panic Attacks and Agoraphobia

I haven't really talked publicly about this before. In my first year of university, around my first exam period, I caught a rather bad stomach virus. The evening before my first exam I was throwing up. I managed to make it to my first exam despite feeling ill but my second exam at the end of the week was a no go. I went round to my friends and was weak, shaking and retching. I managed to make my way to the doctors to get a sick note and was told to go back to bed.

That, I think, was where my illness began. I felt ill before the next exams I had, right up until starting them. But once I got into the exam and started it I was fine, and I was fine for the rest of the day. Then it started spreading to other events. I felt ill about going to the cinema, and one time while I was in the cinema I felt really ill and had to come home.

This prompted me to go see the doctor. I mentioned my symptoms of feeling nauseous and tired and he put it down to an intestinal problem, possibly caused by both my stomach producing too much acid and my previous illnesses. I was given some tablets and carried on. However, the symptoms didn't go. I just put it down to it being something you can only manage and not really treat. But I didn't really notice that the symptoms were cropping up around more and more events.

Then I started my 3rd year of university. I went back to my student house, settled in and then went out to do some shopping. I really struggled to head out. I made it but I felt ill. Then came lectures. I started heading out to my first lecture but I couldn't make it. Part way there I gave up, went back home and then made a doctor's appointment. The following day I went to the doctor's, with an incredible amount of effort for a journey I'd done many times, especially as the doctor's was fairly close to where I had lectures.

I was diagnosed with having panic attacks and given something for the symptoms and referred to a mental health advisor. A few weeks later I was also put on some medication to help control the panic attacks, a mild anti depressant. Over the following 18 months I slowly got better. I discovered that it was simply due to my brain becoming confused, invoking the instinctive "fight or flight" response in situations where there is no danger, and I had to recondition my brain back to normal.

Panic attacks are caused by a downwards spiral of your emotions, your behaviour, your thoughts and your environment. Each affects the others, causing you to associate an event with something bad. Gradually your safe zone shrinks, to the point where you can hardly go beyond certain places without panicking. This is agoraphobia. For some people it is even worse, and they can't leave their bedroom without suffering an attack.

The cure is to break the spiral. This can initially be hard. If you panic as you go outside then stay there for a while. Get past the panic, which will only last 5-15 minutes. Take deep breaths and prove to yourself nothing bad will happen. Then go further and further and further each time. In about 8 months I went from being unable to go outside without panicking to being able to get a train on my own all the way to NSConference.

But as you start to change you notice changes in how you react. My initial reaction was "I can't make it, I'll feel ill and probably throw up on the train, or I'll miss a connecting train due to it being late or me not finding it". What you think is actually very logical, the issue is that your initial assumptions are wrong. Take missing the train due to not finding it. There are two assumptions you can make:

1. You can assume that there will be no signs or porters to point you to where you want to go
2. You can assume that there will be lots of signs and porters to point you where you want to go

Now most people would say that 2 is far more likely than 1, to the point of 1 pretty much never being the case. To that degree it isn't unreasonable to base your logic on assumption 2. The issue with panic attacks is that you start off with assumption 1, and to you that seems the reasonable assumption. Same with feeling ill: odds are if you feel ill you won't throw up, but you decide to assume that you will. When you are cured and you look back, you see the logic you go through and you understand it, but you see that your assumptions upon which the logic was based were wrong.

My Reaction

My initial reaction was that I'd be like this for the rest of my life, unable to head outside. Slowly that changed and I was able to see when I'd be able to move around again freely. Agoraphobia and panic attacks are horrible things to have, as they can seriously affect your life. They affect your work, your entertainment and your relationships.

My Advice

Go see your GP. If you really cannot head out then arrange for a house visit. You can't cure yourself alone. But that said, you don't need medical advice or drugs to help you. You just need support from others. And surprisingly, it isn't hard to find others who have suffered from this. In the US about 2% of adults suffer from agoraphobia. I follow over 300 people on Twitter, which means that statistically around 6 of those people will suffer from agoraphobia at some point. There will likely be more who suffer from general panic attacks. I found out that two friends I'd known for years had also suffered from panic attacks.

It's also important to point out that panic attacks and agoraphobia, while linked, are not the same thing. A panic attack is your body panicking in a situation. Agoraphobia is the fear of having a panic attack in a certain place. Often this does manifest itself in a fear of open spaces and leaving home, but it doesn't have to be that. You can suffer from agoraphobia while feeling fine going out and about, but fear having a panic attack when going into a shopping centre.

The other advice is quite simple, but often the hardest. Force yourself to go where you don't want to. If you have a panic attack then start taking deep breaths. But also take it in baby steps, go at a pace that suits you, but make sure that pace is heading forwards not backwards. I'm not cured of panic attacks, but I can control them now, rather than them controlling me.

And a suggestion for those who have read this far, whether you suffer from agoraphobia or panic attacks or not. By chance I met a woman on twitter called Jen Lant. She has been writing a blog about suffering from agoraphobia and her attempts to cure herself. She is also the one who inspired me to write this post.

A lot of her thoughts and emotions match mine when I suffered from it and it also shows some of the strain it can put on relationships. If it helps someone suffering from panic attacks or agoraphobia realise they aren't alone or it helps someone understand what it is like to go through panic attacks and agoraphobia so that they can better help someone who suffers, then it is worth it.

If you would like to send me a comment on this post, send an email to pilky@mcubedsw.com. Alternatively send a tweet to @pilky.

32 Bit is Dead, Long Live 64 Bit! tag:pilky.me,2010:view/4 2010-06-26T00:00:00Z 2010-06-26T00:00:00Z Martin Pilkington pilky@mcubedsw.com OK, let's be frank. 32 bit is dead on the Mac. Apple cut out most of their 32 bit users when they dropped PPC support in Snow Leopard. Those that remain are 3-4 year old machines, so are likely to be replaced in the next 12 months given the 3-5 year upgrade cycle most people have. 32 bit is a legacy platform and should be treated as such.

Given that the last 32 bit machines were sold only a year after the last PPC machines, I would not at all be surprised if OS X 10.7 dropped 32 bit completely. Like PPC it is unnecessary cruft when the future (and several years of the past) is entirely 64 bit. Given this, I'm also phasing out 32 bit support for M Cubed's apps as I move them to be 10.6 only. The first of these will be Lighthouse Keeper 1.2.


I am under no illusions that this will be an easy move. I'm effectively dropping 3 platforms in one go: 32 bit Intel, PPC users and 10.5 users. But the fact is that those are all platforms that are rapidly shrinking in size. Probably the one that will cause the most pain to my users is dropping 10.5 support, I'd wager there are more 64 bit Intel users of 10.5 than there are 32 bit Intel or PPC users, even at this stage. But every new Mac ships with Snow Leopard and it is only a $29 upgrade.

There are also difficulties in terms of marketing. How do you inform a user what platforms something will run on? Do they know if their machine is 64 bit? What if I list the processor brands that are 64 bit, will they know that? What if I tell them where to look, am I asking the user too much? How about instead telling them what won't work, or tell them that Macs sold after a certain date will work? There are flaws in many of these and yes it will be fairly messy. It could even be seen as a big risk to take, doing this now rather than waiting until Apple drops 32 bit support. But that is a risk I'm willing to take.

Comparisons to PPC

What PPC Macs are to Intel Macs, 32 bit Macs are to 64 bit Macs. However, there are some differences from a technical point of view. Firstly, dropping PPC support gains you relatively little as a developer. Unless you're dealing with something low level your code is generally identical for both platforms, you are basically just reducing the size of your shipping app and your testing requirements. And let's be honest here, how many of us have been testing on PPC as much as we have on Intel? Most of my PPC testing involves booting my computer into OS X 10.5 and running my app in Rosetta, not exactly ideal.

Where this comparison is different is in the code you can write. By going 64 bit you can use the modern Objective-C runtime. This gives you lots of new language features, including some that make your code more future proof. Many of these features mean you can write less code. Code you don't write is the easiest and fastest to debug, test, document, read and support. There are also features that any app that runs in 64 bit mode (even universal apps) gets, such as access to more memory, increased processing speed and better security. Things that require 64 bit are just going to get more and more common.

But why drop 32 bit?

I could get the access to memory, processing speed and security features just by having a 64 bit mode in my app. So it seems like my argument is just wanting a few new language features for myself. Well in a way it is that, but it also comes down to two beliefs I have about software development, one of my own and one taken from Wil Shipley:

1. You should remove legacy and unused code from your application as soon as possible
2. People are going to be buying the latest hardware/OS before they buy your app

I re-enabled the recording of system profile info sent to M Cubed's server when a user who has opted in checks for updates. It will take a week or so before the full stats come in, but so far all of the submissions are from 64 bit Intel Macs. Even if the odd few 32 bit Macs pop up, it would still mean that I am writing, compiling and shipping code that almost nobody is using. Why have that code polluting my codebase?

Aren't there lots of 32 bit Intel Macs out there?

You would think so, but Apple didn't actually sell that many of them. Apple started moving from PPC to Intel in the first half of 2006. In the second half of 2006 they were moving from 32 bit to 64 bit. This is how long Apple sold 32 bit Intel versions of each Mac:

  • iMac: 9 months
  • Mac mini: 18 months
  • MacBook: 6 months
  • MacBook Pro: 10 months

Doing some VERY rough calculations, assuming that all Macs sold until October 2006 were Intel Macs (they weren't), 15% of Mac sales in the first half of 2007 were Mac minis (I seriously doubt it is that high) and all of those 32 bit Intel Macs are still actively used (they won't be), then you get a figure of around 4.78 million 32 bit Intel Macs sold. Of course this is out of around 37.5 million Intel Macs sold between January 2006 and April 2010. That would mean that 13% of all actively used Intel Macs are 32 bit machines (and that is with all three contributing figures being over-estimates).

Of course in the next few weeks we'll get the 3rd quarter results from Apple and I would be surprised if they didn't sell 2.5 million Macs or more, so if we add that to the total Intel Macs sold then we get 12%. Assuming they sell 3 million Macs again in the 4th quarter that drops to 11% and if they do 3.3 million Macs in the 1st quarter of 2011 again then we have 10%. Basically every quarter 32 bit Macs are accounting for a whole percentage point less.

And the above figures all assume nobody is replacing their 32 bit Macs. Even if we are very generous and assume that only 30% of those Macs have been replaced, then we get them making up 9% of Intel Macs. There are more people using Tiger right now than are using 32 bit Intel Macs!
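
As a tiny sketch of that arithmetic (the figures are the same rough over-estimates as above, not real sales data):

#import <Foundation/Foundation.h>

int main(void) {
	@autoreleasepool {
		// Rough over-estimates from the paragraphs above, in millions of Macs.
		double thirtyTwoBitMacs = 4.78;  // 32 bit Intel Macs sold
		double intelMacsSold    = 37.5;  // all Intel Macs sold, Jan 2006 - Apr 2010
		NSLog(@"Current share: %.0f%%", 100.0 * thirtyTwoBitMacs / intelMacsSold);

		// Assumed sales for the next three quarters, as in the post.
		double futureQuarters[] = { 2.5, 3.0, 3.3 };
		for (int i = 0; i < 3; i++) {
			intelMacsSold += futureQuarters[i];
			NSLog(@"After another %.1f million Macs: %.0f%%",
			      futureQuarters[i], 100.0 * thirtyTwoBitMacs / intelMacsSold);
		}
	}
	return 0;
}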

Ultimately, 32 bit makes up less than 10% of people able to run Snow Leopard. Odds are that number is even less if you took the subset that actively buy Mac software. If you are writing new software, or re-writing existing software, then there is little reason to support something that is rapidly fading into irrelevance and may be dropped entirely from OS X in the next 12-18 months.

If you would like to send me a comment on this post, send an email to pilky@mcubedsw.com. Alternatively send a tweet to @pilky.

Keep It Simple Stupid tag:pilky.me,2010:view/3 2010-06-04T00:00:00Z 2010-06-04T00:00:00Z Martin Pilkington pilky@mcubedsw.com So yet again I saw a tweet about the impending death of the Mac in favour of the iPad and yet again I feel the need to blog my answer rather than have 10 conversations about it on Twitter. Here is the tweet:

RT @joehewitt: You'll know the Mac is officially dead when Apple releases Xcode and Final Cut Pro for iPhone OS. <- +1, we're on that path

(NB: From the 3 posts so far on this blog (including this one), you'd assume I have something against Joe Hewitt, given that two of them are arguments against things he's said. I respect Joe, but I disagree with him a lot about his views on the future of the iPad and Mac.)

OK, so I'll flat out state that Xcode and Final Cut Pro will not make it onto an iPad (nor will Photoshop, Word etc) without being either less powerful or less productive. This isn't a case of what the SDK is capable of (though Apple would need massive exemptions to App Store policies for an Xcode iPad app) or what the iPad hardware is capable of. It is simply a matter of user experience.

These are very large and very powerful applications. They do a lot of stuff. To some degree some don't do enough (I'm looking at you Xcode). These applications just aren't well suited to the iPad.

Less is More

For the vast majority of people fewer features, less UI and less clutter result in more fun and more productivity. As such iPad apps tend to be less powerful than their desktop counterparts. A lot of this comes down to the form factor. A larger screen makes it easier to manage larger applications. Yes, you could manage Xcode or FCP on a 1024x768 screen, but it wasn't that good an experience.

The fact is that very powerful applications don't work well on small screens. The larger the screen the more powerful an application can potentially be. That isn't to say it should become more bloated, but it opens up the possibility.

Take a look at applications like OmniGraffle or Keynote. They can be done on the iPad, but are more limited than their Mac counterparts. However, there is no way they could be done well on the iPhone. It is far too small a screen.

Xcode to iPad

So what would an iPad version of Xcode look like? I think possibly something like this:

On the left you have your files or you can go back and view targets, breakpoints, executables etc. On the right you have your code view and some toolbar buttons for common actions. So far so good. Unfortunately there are other things that need to be displayed:

  • Build settings
  • UI editing
  • Build results
  • Debugging
  • Static Analysis
  • Refactoring

And those are just a few of the things that need adding, there are many more. You can do the same with Final Cut Pro or Photoshop or Word. You port the UI to the iPad but end up having to hide the vast majority of functionality, which would be OK except there is no concept of menu shortcuts to access it.

80/20 Rule

I think this sudden sentiment that the Mac is a dead man walking is incredibly misplaced. These people are ultimately idealists and it would be nice if things could be the way they want them to be. But I'm a pragmatist and so I look for the practical solution. I do think that Xcode, Final Cut Pro, Word, Photoshop etc will find their way onto the iPad to some degree. They will in no way make their Mac counterparts irrelevant though.

The personal computer has long tried to be the 100/100 device: everything to everybody. To some degree it has succeeded, but it has led to something that is too complicated for most people to want to use.

I see tablets like the iPad as becoming 80/20 devices. 80% of people only really care about 20% of the capabilities of a personal computer and I believe tablets will ultimately fill that role. Photoshop could appear in a very basic form (think Elements with some stuff taken off) and Final Cut Pro will probably appear in the form of iMovie.

Back when the iPhone first came out, we were told that it is best to do a lot less, but do it incredibly well. I believe that still stands for the iPad. The iPad version of iWork will never match the capabilities of the Mac version, but it doesn't need to. If it does the core stuff really well then that will be enough for most. As Pages is the bits of Word most people care about done in a more refined way, the iPad version of Pages will become the bits of Pages people care about done in a more refined way.


Now, despite all that, there is a way for the iPad to potentially have large, powerful applications and that is to cheat. You may remember this patent from a few years ago about an iMac-like docking station. Essentially an iMac with the brains taken out and a slot for a laptop to go in. The dock potentially provides extra ports, a larger screen, better speakers, more storage etc.

Now imagine that instead of a laptop, we had an iPad. You are merrily working in the cut down version of Pages but then reach a point where you need a bit more power. Maybe a larger screen, hardware keyboard etc. At this point you go to your desk where your dock is, pop in the iPad and the dock's screen lights up. There is a more powerful version of Pages with your document exactly as you left it.

Now the dock may need to hold better graphics, CPU, more memory etc. Basically it could be an iMac but with the OS and applications stored on the iPad. These more powerful versions could be better designed for larger screens and possibly even support extra input devices like mice or drawing tablets. And this capability isn't a pipe dream, we already have this with iPhone/iPad universal apps. It is merely putting another level on.

Now I seriously doubt that any such device would show Mac OS X. Similarly, it couldn't really work exactly like an iPad. I think a reasonable idea is a UI like 10/GUI, which is one of the best multitouch desktop concepts I've seen and would fit rather neatly into the iPhone SDK's view of things (eg that windows aren't really important anymore).

And finally we have solved one of the greatest practical technology problems of the modern era with a very elegant solution. People won't need laptops that are too heavy for travelling, more expensive than desktops of the same power, and covered in wires that need plugging in whenever you want to work at a desk, just so they don't have to keep data in sync between two machines. You'll have your desktop dock with the extra power and everything plugged in, and your light and highly mobile iPad that you can just pop in or out and go.

Form Limits Function

Ultimately, the iPad's form factor limits what it can do. The screen is a certain size, it has touch input, it has a software keyboard. This isn't the perfect form factor and it can't do everything, nor should it. However, it is probably the best form factor for what most people want to use computers for.

We shouldn't be pushing for the iPad to get as powerful as a desktop. Too many people feel that A has to win out over B and there can't be any other way. They fail to see that B has advantages too, often ones inherent in its design. To add those advantages to A would require losing the advantages that A already has, essentially making it another B.

Consider the Mac a hammer and the iPad a screwdriver. Just because we've been hammering in screws all this time doesn't mean that, now we have a screwdriver, it is perfect for knocking in nails and we don't need the hammer any more.

If you would like to send me a comment on this post, send an email to pilky@mcubedsw.com. Alternatively send a tweet to @pilky.

The New (Tech) World Order tag:pilky.me,2010:view/2 2010-06-03T00:00:00Z 2010-06-03T00:00:00Z Martin Pilkington pilky@mcubedsw.com There was a short twitter conversation between @rentzsch and @joehewitt last night about the decline of the Mac market due to the iPad. It was started by this tweet:

If the Mac market is going to shrink to the size of the current market for Mac Pros, perhaps that will be the only model they keep alive?


I don't agree with these tidings of doom for the Mac. The fact is, a desktop computer is still the best tool for many tasks, and a touchscreen tablet won't replace it unless it becomes a desktop computer. The iPad form factor/input mechanism is inherently flawed for certain tasks, while the desktop excels at them.

Price = Time

I believe that with computing devices (PCs, tablets, smartphones etc), the price you pay is linked to the amount of time you will spend with it. Smartphones are generally the cheapest and you spend relatively little time with them. Usually a few minutes, maybe 10-15 minutes tops. There are two reasons for this. The first is that by definition, it is a travel device, and while travelling you generally don't need to sit down for long periods and do something, you need some information quick. The second is that it is too small to do any real work or entertainment on.

Tablets, assuming they all follow the basic form and input of the iPad, are in the middle price wise and also time wise. You will often spend 10-15 minutes on it in one go and anywhere up to an hour or two if you are watching a movie, reading a book or creating something. However, it is very rare that you will use it for over two hours at a time.

PCs are at the high price range (or they will be when tablets take off). You often spend anywhere from an hour or two up to an entire day in front of one. You usually spend a lot of time working on them or playing on them.

So that is where I see the three devices. I've already explained why the smartphone isn't used much, but explaining why the tablet and PC are where they are takes a bit longer. Ultimately though it comes down to a few things: ergonomics, power and accuracy.


If you are working anywhere for a long period of time you need an ergonomic workspace. I know this as well as anyone after suffering a bad case of RSI when I was 17/18. This was largely down to me working on laptops, where the screen was low, the keyboard was a bit cramped and the edge of the laptop dug into my wrists. Since then I have moved back to using a desktop as my primary machine, switched to an ergonomic keyboard and got a decent mouse.

Tablets are far from ergonomic. You can get into a position where you are comfy consuming (reading a book, surfing the web, watching a movie) quite easily, but for creation it is a bit harder. They are worse than laptops in that the input and the screen are even closer together. You have a choice between your arms being in an uncomfortable position and your neck being in an uncomfortable position.

There isn't really a good solution. You could wall mount it at eye height and use a bluetooth keyboard, but then your arms have to reach up to touch. You could use a keyboard dock but your head has to tilt down. There is no good way to create for long periods of time on the iPad. However, for creative tasks in short bursts it can work very well (eg writing a song where you every so often use the iPad to jot down the notes and/or lyrics).

The ergonomics of the PC have been well honed over many years and there is lots of advice and lots of products out there to help you. It became important because of how long people spend in front of a computer.


Basic law of technology: new technology costs a lot to make and isn't very efficient. This is why all new technology goes into the high end products before trickling down to the lower end. Faster processors go in large towers before they get cheap enough and efficient enough to go into laptops. This is why the current Mac Pro allows up to two quad-core 2.93GHz processors, the current iMac up to one quad-core 2.8GHz processor, the current MacBook Pro up to one dual-core 2.6GHz processor, the iPad a single-core 1GHz processor and the iPhone 3GS a single-core 600MHz processor.

This law won't change, so the bigger, more expensive devices will always get the new stuff first. It will probably be 5-10 years before the iPhone has the processing power equivalent to the current top of the line computer, but at that point the top of the line will have 5-10 years more advanced technology.

There is also the power in the form factor. A tablet is, for all intents and purposes, a hand held device. Therefore it needs to be small and light. You're not going to see a tablet get much bigger than 10-12", before it gets too big and too heavy to bother with (much like a 20" laptop). For many things a bigger screen is essential. For any sort of pro media editing, the bigger the screen the better. For programming, the bigger the screen the better. And not only that, more screens can be better too. And this isn't just a geek thing, many people who have these setups aren't tech savvy, they just need them. You are never going to have a 27" iPad with support for 2+ displays. At that point it isn't a tablet and more of a paving slab.

Ultimately, a PC will always have faster processors, more storage, more RAM and more form factor freedom than a tablet, in the same way tablets will almost always hold the same advantages over smartphones.


Accuracy is important for many lines of work. Not only how accurate you can be but how often you can be that accurate. You are always going to be more accurate on a hardware keyboard due to the tactile feedback (at least until we get tactile feedback on multitouch screens). The mouse is always going to be more accurate because of the disconnect from the screen meaning you can see exactly where it will act.

For tasks that require accuracy, this is a killer. The only way for a tablet to get the level of accuracy would be to either use a stylus or a mouse, neither of which are ideal and would hamper the tablet.

Cars and Trucks

At the D8 conference Steve Jobs said the following:

When we were an agrarian nation, all cars were trucks. But as people moved more towards urban centres, people started to get into cars. I think PCs are going to be like trucks. Less people will need them. And this is going to make some people uneasy.
Steve Jobs

This really is the perfect analogy. Cars are ideal for almost all everyday tasks. You can go to work, go shopping, take the kids to school etc. Trucks, vans, lorries etc. are still needed though. They are used by workmen to hold their tools, by companies to transport goods and by everyday people for handling tasks like moving furniture.

Trucks are heavy duty machines, ideal for the tasks they are used for. Cars are for your everyday tasks. PCs and tablets will be the same. Despite all the things mentioned above, tablets are still better for 70% of computing tasks. They will become the more dominant form of computing, which most people use for browsing the web, talking to friends, sending email, playing a game etc. Of course this will depend on various things. With the iPad it needs both the multitasking in iPhone OS 4.0, and a mail client that isn't pretty much useless.

PCs will stick around. Fewer people will need them. I expect tablets will wipe out the sub-$900 PC market in the long run. PCs will become higher end items, for those who need them for extensive work or for specialised tasks. I doubt Apple will be hit too much by this. The Mac Mini will probably die, as will the lower end laptops. Companies like HP and Dell may temporarily shrink, depending on whether their tablets take up the slack of the fall in low end PC sales. Ultimately though their profits should increase as they stop selling as many products with razor thin margins. I think if HP pulls off the Slate with WebOS it could quite easily retain its role as the dominant player in the computer industry.

I have no doubt that tablets are the future, I'm just incredibly sceptical that PCs are the past.

If you would like to send me a comment on this post, send an email to pilky@mcubedsw.com. Alternatively send a tweet to @pilky.

Welcome To My World tag:pilky.me,2010:view/1 2010-05-29T00:00:00Z 2010-05-29T00:00:00Z Martin Pilkington pilky@mcubedsw.com Welcome to my new personal blog. I've decided that I need a place to write my thoughts down, rather than spewing them over twitter or IM or other more realtime forms of communication where my hands type faster than my brain works.

And before you say anything, I know that there is no X, Y or Z. I'm running on a custom blogging app written using Django. I thought that this would be as good a project as any to learn Django with. There are some things that I haven't got round to yet (RSS feeds), some things I don't need quite yet (pagination) and some things I'm just not adding (comments).

Keep an eye out for new posts as I plan to write a lot of stuff up on here very soon and possibly transfer some posts from my old, now-defunct personal blog. Hopefully it will be interesting.

If you would like to send me a comment on this post, send an email to pilky@mcubedsw.com. Alternatively send a tweet to @pilky.