Tales From An Unchecked Mind

A Blog By Daniel Kennett

Compile-Time NSLocalizedString Key Checking

There are two typical flows for using NSLocalizedString to localise your application’s strings:

  1. Type the “base” string into your source directly, then use genstrings to generate strings files. In your .m files, your calls look like this: NSLocalizedString(@"Please enter your name:", @"String used to ask user for their name");

  2. Type a unique key into your source, then add the string for that key straight to the strings file. In your .m files, your calls look like this: NSLocalizedString(@"CBLUsernameFieldLabel", nil);

There are various merits to both approaches, and I’m not here to argue which one is best. In my world, we use approach #2 at work because our localisation process kinda requires it, and I use it in personal projects because, well, seeing user-facing language in .m files gives me the heebie-jeebies — those files are for computer language, not people language.

This post is mainly for people who use approach #2.

Whither Error Checking?

At least once in your life, you’ll have seen something like this in your projects:

Even worse, if you’re using approach #1, you might not notice the missing localisation until you’ve shipped your product and start getting complaints from customers that half your app is in English and the other half is in Spanish.

The problem is that there’s no compile-time checking of strings files, and while there are a few debug tools you can use to spot un-localised strings, in the real world they won’t be run nearly as often as they should be.

After extensive research (ten minutes of Googling) and a quick poll of Twitter (which resulted in one suggestion involving grep, and an argument), I couldn’t really find anything that already does this.

If You Want a Job Doing…

I ended up writing a little tool that takes a .strings file as an input and outputs a header file containing NSString constants for each key in that file. It turns this:

…into this:

Now we have compile-time checking that my keys are present and correct, and we get autocomplete for free. Much better!
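To make that concrete, here’s a rough illustration with made-up keys (the exact output format of the tool may differ slightly from this sketch). A strings file containing:

"CBLUsernameFieldLabel" = "Please enter your name:";
"CBLPasswordFieldLabel" = "Please enter your password:";

…would yield a header along these lines:

// Auto-generated. Do not edit.
static NSString * const CBLUsernameFieldLabel = @"CBLUsernameFieldLabel";
static NSString * const CBLPasswordFieldLabel = @"CBLPasswordFieldLabel";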

The tool is very simple, and is 80% error checking. It reads the keys in using NSPropertyListSerialization and writes the found keys out to a header file. You can see the source over on GitHub.
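The heart of that approach is only a few lines long. Here’s a minimal sketch of it rather than the tool’s actual source: argument handling is reduced to two positional arguments, keys are assumed to be valid C identifiers, and almost all of the real tool’s error checking is omitted.

#import <Foundation/Foundation.h>
#import <stdio.h>

// Sketch: read a .strings file and emit one NSString constant per key.
// .strings files are old-style (OpenStep) property lists, so
// NSPropertyListSerialization can parse them straight into a dictionary.
int main(int argc, const char *argv[]) {
    @autoreleasepool {
        if (argc < 3) {
            fprintf(stderr, "Usage: %s <input.strings> <output.h>\n", argv[0]);
            return 1;
        }

        NSData *data = [NSData dataWithContentsOfFile:@(argv[1])];
        if (data == nil) {
            fprintf(stderr, "Couldn't read %s\n", argv[1]);
            return 1;
        }

        NSError *error = nil;
        NSDictionary *strings = [NSPropertyListSerialization propertyListWithData:data
                                                                           options:NSPropertyListImmutable
                                                                            format:NULL
                                                                             error:&error];
        if (![strings isKindOfClass:[NSDictionary class]]) {
            fprintf(stderr, "%s doesn't look like a valid strings file.\n", argv[1]);
            return 1;
        }

        // One constant per key, with the constant's value equal to the key itself
        // so it can be passed straight to NSLocalizedString and friends.
        NSMutableString *header = [NSMutableString stringWithString:@"// Auto-generated. Do not edit.\n"];
        for (NSString *key in [strings.allKeys sortedArrayUsingSelector:@selector(compare:)]) {
            [header appendFormat:@"static NSString * const %@ = @\"%@\";\n", key, key];
        }

        if (![header writeToFile:@(argv[2]) atomically:YES encoding:NSUTF8StringEncoding error:&error]) {
            fprintf(stderr, "Couldn't write %s\n", argv[2]);
            return 1;
        }
    }
    return 0;
}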

Putting It All Together

To integrate this into your project, there are three steps:

  1. Generating the header files when your project builds.
  2. Telling Xcode where to find the generated files at build time.
  3. Importing the generated header files so you can use them.

First, you want to create a custom build step in Xcode before the Compile Sources build step to generate header files from your strings files. You could be less lazy than me and create a custom build rule to automatically do this to all your strings files, but I’m lazy. My custom build step looks like this:

"$PROJECT_DIR/Vendor/generate-string-symbols/generate-string-symbols"
    -strings "$PROJECT_DIR/Cascable/Base.lproj/GeneralUI.strings"
    -out "$BUILT_PRODUCTS_DIR/include/GeneralUI.h"

It uses /bin/sh as its shell, and I have the generate-string-symbols binary in the Vendor/generate-string-symbols directory of my project. It places the generated header file in the include directory of the build directory.

Next, you need to tell Xcode where to search for your files. Make sure your project’s Header Search Paths setting contains $BUILT_PRODUCTS_DIR/include.

At this point, you can start using the symbols in your project. However, you’ll need to #import your generated header file(s) in each file you want to use localised strings in.

To get around this, you can #import them in your project’s prefix header file.

In my project, I have a “convenience header” which imports the generated files and provides a couple of helper macros to make localisation a little nicer, especially considering I use non-default string table names.

CBLLocalizedString.h
#import <Foundation/Foundation.h>
#import <GeneralUI.h> // Generated from strings file

#define CBLLocalizedString(x) NSLocalizedStringFromTable(x, @"GeneralUI", @"")
#define CBLLocalizedStringWithFormat(x, ...) [NSString stringWithFormat:CBLLocalizedString(x), __VA_ARGS__]
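With the generated constants and these macros in place, call sites stay short and are checked by the compiler. Something like this (the labels, the username variable and both keys are hypothetical examples):

self.usernameLabel.text = CBLLocalizedString(CBLUsernameFieldLabel);
self.welcomeLabel.text = CBLLocalizedStringWithFormat(CBLWelcomeMessageFormat, username);

A typo in either key is now a compiler error instead of a silently un-localised label at runtime.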

…and you’re done! You can find the generate-string-symbols project over on GitHub under a BSD license. Enjoy!

Photo Challenge: Time Warp

Over a year ago, I “reviewed” my Canon 6D. I love that camera, and it can take beautiful photos. However, it’s big, especially with the battery grip attached.

This means that I only take it with me when I actively want to take photos, which doesn’t include, well, 95% of my life. Sure, I always have my iPhone with me, which has a great camera for a phone… and until that suffix — for a phone — is gone, it’s not going to cut it as something to record photos that need to last a lifetime. It’s great at photos of landscapes or people where everything is roughly the same brightness, but as soon as you show it something challenging such as a scene with bright areas and dark shadows, the poor dynamic range sticks out like a sore thumb. As a friend put it: “So, let’s summarise: Your device which has no other purpose than to be a camera, is better at being a camera than your phone which is many devices. Who knew?!”

Anyway, my search for a small but high-quality camera led me to Micro Four-Thirds (MFT) cameras, and on to the snappily-titled Olympus OM-D E-M10. I’ve always had a soft spot for Olympus cameras (I still have an Olympus OM-1 which was given to me as a child by my father), and after a bit of deliberation I picked one up.

Once I got it home, I realised just how similar it was to my 1970s-era OM-1!

Occasionally I pick up that camera and wish I could take pictures with it again. I have fond memories of using that OM-1 when I was younger and the excitement of dropping off my roll of film at the local chemist (followed often by the disappointment of all my photos being terrible). Perhaps it’s all nostalgia, but it’d be fun to take photos like that again.

So, I present to you…

The “Time Warp” Photo Challenge

The idea is to take photos using only the features available to you on a ’70s-era SLR like my OM-1. Here’s the OM-1’s feature list:

  • Manually adjustable shutter speed (dial on camera)
  • Manually adjustable aperture (dial on lens)
  • Manually adjustable focus (ring on lens)
  • Adjustable ISO (replace film with film of the desired ISO)
  • Clockwork self-timer
  • Photo capacity of 36 shots with the right film
  • Light meter (literally the only electrically powered thing on the camera)

So, here are the rules of the challenge:

  1. You’re only allowed to take 36 shots.
  2. Your first viewing of the photos must be back at home after you’re done. You’re not allowed to look at them on the camera screen, or delete them.
  3. You have to choose your ISO value at the beginning of each set of 36 photos.
  4. Manual focus only.
  5. Manual exposure controls (shutter speed, aperture value) only.

If you really want to commit, you can modify #2 to be “Your first viewing of the photos must be after getting them printed at a photo store”, but I want this to be fun rather than a complete loss of modern technology.

With some setup, it’s actually pretty easy to simulate these limitations on a modern camera — otherwise it’d be way too easy to “accidentally” steal a glance at the screen after taking a photo, or to go over the photo limit:

  • For #1 I found the smallest memory card I could (2GB) and filled it up with a big empty file to enforce the 36 photo limit.

  • For #2 I disabled “Photo Review” on my camera, so it doesn’t automatically display the photo on the screen just after it’s taken.

  • #4 was enforced for me by virtue of my old lens.

  • #5 was partly enforced by my old lens, but keeping my camera in M mode isn’t too hard.

The equipment I ended up using:

  • My Olympus OM-D E-M10
  • My old Olympus F-Zuiko 50mm f/1.8 lens
  • A Novoflex MFT-OM lens adapter
  • A 2GB SD card I found in an old point-and-shoot camera, filled up with an empty 1.4GB file to only allow 36 photos to be taken

My Results

Note: All photos after this point were taken under the constraints of the challenge, using the equipment mentioned above. Only minimal editing has been done (cropping, slight contrast boost, etc).

I have to admit, I wasn’t optimistic at first. I tried manually focusing with the new lens that came with my OM-D and had a really hard time. The lens is focus-by-wire and very sensitive — I had trouble finely adjusting focus at wider apertures. I tried with some of the lenses on my Canon 6D which weren’t focus-by-wire, but the manual focus ring was still too sensitive for my liking.

Then I remembered the old manual lenses for the OM-1 that I had in a bag somewhere in the basement. Maybe they’d be better! A decent adapter to put them on my OM-D is an incredibly expensive item considering “it’s just a lump of metal”, but I gritted my teeth and made the purchase…

…and what a transformation occurred!

It’s an incredible cliché to say “They don’t make them like they used to!”, but holy crap, this… thing, this thirty-year-old, dusty, dented thing absolutely transformed my dinky little camera into something that felt meaningful. The hyper-sensitive and lifeless manual focus on my modern lens was replaced with a mechanical focus ring that feels beautiful to the touch, gliding purposefully but smoothly under my fingers. Suddenly, focusing manually was a wonderful experience again.

Suddenly, I was excited about this challenge. Time to go take some pictures!

Now, I’ve never been “that guy” — you know, the one who has his camera on high-speed continuous shooting and takes at least six shots of everything he wants to take a picture of, but I’m completely used to having a 32GB memory card in my camera, allowing for over 1,000 shots before I need to start worrying about free space.

However, having a limitation of 36 photos transforms the experience, especially when combined with the fact I can’t delete or look at the photos I take. Suddenly, the shutter click of the camera becomes the solid thunk of a letterpress machine, a mechanical and meaningful action that you’re acutely aware of. Every photo becomes important.

In the few hours I spent doing this challenge the other day, during an afternoon out driving through a nature reserve to Nyköping, I ended up skipping a ton of photos that I’d normally have taken because they just didn’t feel right.

And you know what? I was loving every minute of it. “I can’t wait to get back home and look at these pictures,” my wife said after I took a photo of her and our dog, Chester. “Don’t look at them before I’m ready!”

Back Home

When we got back home, my wife and I excitedly loaded up Lightroom and imported the pictures. Every single one got a response from at least one of us. Not a single photo was wasted, and apart from a couple where the focus was so bad it ruined the photo, not a single one was deleted.

My computer has over 6TB of local storage available to it right now, with infinitely more available in “The Cloud”. My $20 memory card allows my camera to take over 1,000 photos at once. And you know what? It’s too much. I have many albums in my digital photo library of trips and holidays I’ve been on that contain hundreds upon hundreds of photos. 99% of them are useless, drowned in a sea of pictures of crap I no longer care about. Sure, that lamppost might’ve looked cool at the time, but three years down the line? Its meaning is completely lost among all the other lampposts and trees and street corners in that album.

I then noticed that all of my favourite photo albums only have a few photos in them due to constraints at the time. The WWDC Photo Walk album, for instance, or the time I went to San Francisco for two days and had a couple of hours to kill before the event started.

Conclusion

I absolutely adored this challenge and will be doing it again, repeatedly, in the future. The combination of a good manual lens and the other constraints really reconnected me with photography, and by extension the world around me and how I look at it through a viewfinder.

A more startling realisation, though, is that even when not doing this challenge I should be taking fewer photos, not more. This is going to be a hard balance to achieve, I think, since the proliferation of cheap storage and the desire not to miss a potentially good photo go really well together.

With this in mind, I went to a photo store in Stockholm yesterday and asked if they had any SD cards that were one gigabyte or smaller. I may as well have asked for a floppy disk.

You can follow along with my Time Warp challenge over in my Time Warp set on 500px.

2013 Mac Pro Review: Developer Edition

On June 10th 2013, I was sitting in the upper floor of Moscone West in San Francisco, along with a few thousand fellow iOS and Mac developers.

Roused by a somehow stirring video of nothing but words and dots, I sat back and let the RDF wash over me. iOS 7 is coming, and it’s going to be awesome!

The keynote started as normal. Retail stores. Shipped a quintillion iPhones. Yawn. Craig Federighi started demoing 10.9 Mavericks, which was pretty interesting, but the keynote was mainly for demoing consumer features. I noted a few things that looked particularly interesting and started getting excited about the new APIs that would be coming in Mac OS X.

Then, Phil Schiller came onstage. I’ve had a soft spot for Schiller ever since he pretended to be on a roller coaster while demoing iChat video effects during a keynote years ago, and I always mentally shout “PHIL!!!” when he comes onstage, but I don’t really know why. Phil started talking about minor bumps to MacBook Airs. Zzzzzz.

Wait, what’s this? Sneak peek of new Mac hardware? Finally, the new Mac Pro! Everyone cheers. I cheer. I feel a little bit sad that now the iMacs are powerful enough for everything I need I can’t justify the expense of a Mac Pro any more, but I’m excited. “We’re going to go a little over the top…” says Phil.

My ribcage starts to rumble. The bass ramping up throughout the room as the video starts was more of a feeling than a sound. Angles start flashing across the screen and not much else. Sigh. Another non-announcement.

Wait. It’s round. If it’s round it has to be…

Oh no. No no no no no.

It’s small.

“Can’t innovate any more, my ass!” Phil quipped, a smile on his face giving away a sense of genuine pride. In seconds, the non-announcement had turned into a full-on discussion of the new machine.

Phil started talking about the design. He got to “Unified Thermal Core” and I was gone. They’ve made a spiritual successor to the G4 Cube! Phil started reeling off numbers I didn’t care about as I worked myself into a tizzy.

You see, I have a special bond with the G4 Cube. It was my first “real” Mac, and my mother bought it for me when I was a kid. I admired the beauty of the engineering of that machine. I learned to program on that machine. I cycled six miles to my local Mac reseller the day Mac OS X 10.0 came out and excitedly cycled home in the rain to install it on that machine. I’ve had many Macs since, but none had the soul and beauty of the G4 Cube. Coupled with a pile of nostalgic memories, I loved that machine so much I still have one to this day.

Generation Gap

Well, that was it. There is no way I couldn’t have one. I let my fiancée know the bad news, then tweeted what I thought was a fun but stupid screenshot of our conversation in what turned out to be my most popular tweet of all time.

Don’t worry — we still got married.

Now, time to retroactively justify spending so much money on a computer!

The Review

Disclaimer: This review is about day-to-day usage of a 2013 Mac Pro from the point of view of a Mac OS X and iOS developer, using tools like Xcode to get work done. The benchmarks here are only useful when compared to other benchmarks in this post, and my main focus is the overall “feel” of the machine compared to other modern Mac equipment rather than raw numbers. For detailed, millisecond-precise benchmarks I can highly recommend Ars Technica’s review of this machine.

What I’m Working With

Let’s begin by discussing what I’m working with. First, the hardware. I’ll mainly be comparing the Mac Pro with my work-provided 15” Retina MacBook Pro since they’re the two machines I have access to, and my wife won’t let me fill up her iMac with my crap (which, to be fair, is probably a wise choice).

                 2013 Mac Pro                         2013 15” Retina MacBook Pro
CPU              3.5 GHz 6-Core Intel Xeon E5         2.7 GHz 4-Core Intel Core i7 “Ivy Bridge”
RAM              32GB 1867 MHz DDR3 ECC               16GB 1600 MHz DDR3
Graphics         Dual FirePro D700 6GB                NVIDIA GeForce GT 650M 1GB
Storage          1TB PCI-E SSD                        256GB PCI-E SSD
Specced Price    $5,799 / £4,739 / €5,199 (DE/FR)     $2,799 / £2,399 / €2,799 (DE/FR)

As for coding projects, I deal with a number of projects both at work and personally as side projects. Of these, I’ve chosen two of them to use in this review — for simplicity’s sake, I’ll call them Large Project and Small Project.

I’ve chosen these projects as I feel they reflect two common use cases — Large Project is a product typical of a cross-platform company with many people working on components of the same product, and Small Project is a typical small app that a small development shop or single developer might produce in a couple of months.

To reiterate my disclaimer above, I’m not going to go into detail about the exact number of lines of code, partly because of sensitivity concerns as I’m talking about a commercial application, and partly because it doesn’t really matter. However, to give you an idea of the size of the projects:

                                 Small Project               Large Project
Derived Data*                    150MB                       3.98GB
Debug Binary Size**              2MB                         105MB
No. of Source Files              45 Obj-C .m, 30 C++ .cpp    I have no idea. A lot.
Benchmarked SDK & Architecture   Mac OS X 10.9 (x86_64)      iOS 7.1 Simulator (i386)

* A project’s “Derived Data” is a collection of files generated by Xcode while indexing and building a project. It contains object files, indexes, build logs and various other files that allow Xcode to cache unchanged parts of a project for faster incremental building. The size was measured by deleting the project’s existing Derived Data folder, opening the project and doing a single debug build for a single architecture, then waiting for Xcode’s indexing process to complete.

** Debug binary only, no resources, for a single architecture.

The Small Project is a small Objective-C Mac app that contains 3 targets, all of which are dependencies of the main application and are built as part of a normal build. It contains some C++ code in the form of a third-party open source library, and has a nice and simple build process — open Xcode, push build.

The Large Project is a large iOS app that contains over 100 targets, most of which are dependencies of the main application and are built as part of a normal build. Some targets are heavily or completely C++ based, and the project has a very complex build process involving a wide variety of tools and build scripts in various languages.

Benchmarks

Alright, let’s get down to some benchmarking!

Build all the things! Activity Monitor wasn’t running during benchmark runs, but typical day-to-day apps were (email, Safari, Twitter, etc) to reflect a “normal” environment.

Since my Mac Pro has 32GB of RAM, I also benchmarked building the projects while using a RAM disk for Xcode’s Derived Data folder. I didn’t do this on the MacBook as 16GB isn’t enough to do this with the Large Project.

Sidebar: If you’re a developer reading this, I made a little command-line tool that simplifies the process of creating a RAM disk, designed to be friendly to being run at startup. You can find it over on my GitHub pages.

The builds had some Xcode debug build time optimisations applied as described over here, and are all debug builds for a single architecture.

                     Small Project    Large Project
MacBook Pro          9 seconds        6 minutes, 2 seconds
Mac Pro (SSD)        6 seconds        3 minutes, 58 seconds
Mac Pro (RAM Disk)   5 seconds        3 minutes, 40 seconds

As you can see, the Mac Pro builds projects around a third faster than my MacBook, which, in itself, isn’t all that surprising. With the Derived Data folder placed on a RAM disk, the Mac Pro is 40% faster than the MacBook.

One nice thing to note is that while doing these benchmarks, I had all six cores of the machine pegged at 100% for the best part of an hour. During that time, the fans of the Mac Pro barely made a whisper — a welcome change from the screaming fans of the MacBook.

A Note On Release Builds

I considered doing benchmarks using release builds as well, since they’re slower (optimisation is CPU-intensive), and if you’re building for multiple architectures, build time increases almost linearly (around twice as long for two architectures, three times as long for three, etc). As a rough guide, a typical release build for an iOS app that supports both arm64 (iPhone 5S and iPad Air) and armv7 (everything else at the time of writing) will take roughly 2.5x as long as a single-architecture debug build.

However, this review focuses on a developer’s day-to-day workflow rather than build server duties. That said, I did do a couple of release builds of the Large Project, and you can expect the speedup to be similar to that of debug builds.

Day-To-Day Workflow in Xcode

This is where things get interesting. Clean builds only tell a small portion of the story — day-to-day, clean builds are somewhat of a rarity. Instead, we make many small incremental builds as we write some code, make sure it builds, then test the changes out by running the application or running unit tests.

My MacBook is my daily work machine, and we’ve been at each other’s side for a year or so now. I know it in and out, and until I started working on the Large Project with my Mac Pro, it felt fine.

A typical small task I might do involves finding a file, opening it up and finding the method I need to work on. Then, I’ll need to quickly look up the API reference of something I need to add to that method, then write the new code and test it.

It goes like this:

  • Command-Shift-O to open “Open Quickly”.
  • Start typing the class name HTTPIma….
  • When the file comes up in the list, press Return to open it.
  • Navigate to the method I need.
  • Declare an instance of the new API I need to use: NSURLConnection *connection;.
  • Command-Option-Click the NSURLConnection name to open its header in the Assistant Editor.
  • Read the documentation and amend my code accordingly.
  • Close the Assistant Editor.
  • Run the application and test the new code.

Xcode’s “Open Quickly” panel

After a week using the Mac Pro and doing this regularly, I tried it again on my MacBook.

  • Command-Shift-O to open “Open Quickly”.
  • Start typing the class name HTTPIma….
  • When the file comes up in the list, press Return to open it.
  • Open Quickly is still processing my typing, so by the time the Return registers, a different file is selected.
  • Open the wrong file. Grumble.
  • Repeat, this time waiting until Open Quickly has settled down.
  • Navigate to the method I need.
  • Declare an instance of the new API I need to use: NSURLConnection *connection;.
  • Command-Option-Click the NSURLConnection name to open its header in the Assistant Editor.
  • Beachball.
  • 5 seconds later, the Assistant Editor appears.
  • Read the documentation and amend my code accordingly.
  • Close the Assistant Editor.
  • Beachball.
  • 5 seconds later, the Assistant Editor disappears.
  • Run the application and test the new code.

My MacBook can’t possibly be this bad, can it? After working on the MacBook for a few hours, I got used to it again and realised that it didn’t seem slow before because I’d briefly do something else while waiting for Xcode to catch up — glance at Twitter, take a sip of my drink, etc.

My whole Xcode experience is like this on my MacBook with the Large Project. Getting to the Build Settings pane from a source editor takes a good few seconds as it takes time to bring up each new panel as you navigate there. After a year of nothing else I’d gotten so used to it I didn’t even notice it any more.

I’ve found this week with my Mac Pro to be far more productive than working with my MacBook. It may partly be due to the fact I’m also working from home, away from the distractions and annoyances of the office, but the fact I don’t have time to glance at Twitter or sip my drink as I navigate around certainly helps keep my concentration sharp.

It’s important to note that only the Large Project makes my MacBook behave this way. Working on smaller projects, including work projects much larger than the Small Project I talk about here, the Xcode experience is as fast and painless as it is on the Mac Pro.

Day-To-Day Workflow in Other Development Tasks

I don’t really use huge datasets in anything other than Xcode, so nothing surprising here. grep is noticeably faster on the awesome SSD, as is switching around branches.

One thing that is nice is the ability to run several virtual machines at once without having to care about system resources. This is particularly handy when testing features that involve remote controlling — I can have multiple iOS Simulators running at once without problems.

Conclusion

If you have a reasonably modern, well-specced machine and are bumping into its limits, the Mac Pro gives an amount of extra freedom that I didn’t expect. My MacBook isn’t a bad machine at all, and I just assumed the large project I work with would bring Xcode to its knees on anything. I’ve felt a genuinely large improvement to my day-to-day productivity on the Mac Pro, to the point where working on my MacBook feels clunky and annoying.

If your current workload is bringing your computer to a grinding halt, you might find the Mac Pro gives a refreshing freedom to your day-to-day workflow. If that’s the case, I’d really recommend it.

Otherwise, I’d really struggle to justify the cost purely on a development basis and have a really hard time imagining an indie developer or small development shop generating a project large enough to see the benefits — especially since the newer iMacs and MacBook Pros are excellent development machines and give a great price-performance ratio.

In short, if you have a modern machine and aren’t already thinking “I really need something more powerful than this”, the Mac Pro is really hard to justify.

Unless you loved the G4 Cube — then you should buy one anyway, because the G4 Cube was awesome.

Perhaps I should get that printed on a mug.

Xcode Bots: Common Problems and Workarounds

I love Continuous Integration. I’ve never bothered with it for my side projects since it didn’t seem worth the effort, but now it’s built into Xcode I thought “This is Apple! Two clicks and I’m done, right?”

Wrong.

I spent a few hours battling with the Xcode service on my Mac OS X Server, and below I detail workarounds for the problems I encountered getting my projects to build.

Don’t get me wrong — I love Bots, and I’m sure that in a few months these issues will be fixed as they’re getting tripped up by very common situations. Remember to file those Radars!

After a little work, my main side projects are continuously integrated. Hopefully this post will help you do it faster than I did!

Note: All these tips assume you’re using git. If not, well, you’re on your own!

Repositories that require an SSH key to access

If you host your repository on GitHub or similar, you’re likely to use an SSH key to access your code — especially if you have a private repository. GitHub will actually let you check out over https, but where’s the fun in that when you can follow this simple seven-step process?

1) If your SSH key requires a password, you need to make a password-less version of it using ssh-keygen -p (when it asks for a new password, just hit return).

2) In Xcode, choose Product → Create Bot…, name your Bot and untick Integrate Immediately and click Next.

3) In the next pane, choose Guest as your authentication method. This will succeed on your local machine since you have your keys installed. Continue through and create your Bot.

4) Open the Server application and find your new repository and edit it. Change the authentication method to SSH Key and the user name to the correct value (this is git for GitHub).

5) At this point, you’ll notice a new SSH key has been created for you. Either upload the public key to your host, or click Edit… to replace it with your own password-less keys.

6) Save your changes, return to Xcode and click “Integrate Now” in your new Bot.

7) Success!

8) File a Radar asking Apple to add the SSH Key authentication option to Xcode itself. Feel free to duplicate mine: #15184645: Bots: Can’t enter an SSH Key to authenticate a repository in Xcode.

It’s worth noting that you can make this process slightly simpler by manually adding the repository to the Xcode service in the Server app before creating your Bot. This way, you can set up your SSH keys right away. Make sure, however, that you get the URL of the repository to match the URL you have checked out on your local machine, otherwise Xcode won’t pick it up.

Code Signing and Provisioning Profiles

Ahh, Code Signing. It just got easy, and now there’s a new spanner in the works.

Bots run their builds in their own user. This means that even if they build fine when you log into your server, they may fail to build in the Bot. The tricky part here is making your private keys available to the Bot — downloading the certificate from Apple’s Developer Center doesn’t give you back the private key, so you’ll need to import it from a machine with a working build setup using Xcode’s Developer Profile Export/Import.

Once you can log into your server and build your project with signing, a simple way to allow Bots to access your signing certificates is to copy them from your user’s Keychain to the system Keychain.

Make sure you copy the private key by dragging it to the System Keychain. The certificate will be copied as well.

Once you have your certificates in the System Keychain, your Bots will be able to access them.

Adding your Developer Team to the Xcode service in the Server app should automatically deal with Provisioning Profiles. If not, put your profiles in /Library/Server/Xcode/Data/ProvisioningProfiles.

Submodules

Submodules work fine as long as you don’t have a detached HEAD. If they require SSH keys to access them, after creating your Bot you’ll need to go over to the Server app and manually set up the keys for each submodule.

If you’re not sure if you have a detached HEAD, you can check from the command line or Xcode. On the command line, run git status in each submodule. If you get something like # HEAD detached at 89970d0, you’re in a detached HEAD. From Xcode, try to create a Bot. When it’s time to define authentication for each repository, Xcode will tell you the state of each submodule.

If any of your submodules read “(detached from…)”, they won’t work.

To fix this, you need to get back on a branch. The quickest, hackiest way to do this is to run git checkout -b my-awesome-branch in the submodule to make a new branch. Once that new branch is pushed and available on your remote, your Bot will be able to check it out correctly.

Note: I’ve noticed that Xcode 5.0.1 doesn’t seem to refresh branch information quickly. Try relaunching Xcode if things get weird.

Don’t forget to file a Radar! Again, feel free to duplicate mine: #15184702: Bots: Checkout failure in git projects that have submodules with a detached HEAD.

I made a mistake and now I can’t fix the repository from Xcode!

If you accidentally added a Bot with incorrect authentication details or some other mistake, deleting the Bot may not fix it. You also need to delete the repositories from the Xcode service in the Server app. When you do that, Xcode will offer to reconfigure them when you create a new Bot.

The Future

Despite these niggles, I really love Continuous Integration and Bots. I’ll try to update this post as I find more issues, and hopefully as the Bots feature matures and these issues go away. Follow me on Twitter to get update announcements.

Updates

  • October 9th, 2013: Added Radar bug numbers for SSH keys and submodules.

Summer Experiment: Hacking, Agile Style

I’ve been working at Spotify for two and a half years now. Previously, I ran my own software company for a number of years. Before that, I was in full-time education from the age of… well, whatever age you start going to nursery school.

One big thing I’ve learned since being at Spotify is actually how to make a product. My Computer Science degree taught me how to write programs, and from that I went straight into shipping my own software. Honestly, now that I’ve worked at Spotify for a while, I realise what a miracle it was that we even shipped anything back then, let alone earned enough money to support multiple employees.

Taking a Break From Programming? Let’s Write a Program!

Like all good programmers I know, I enjoy working on the odd spare-time project now and then. What typically happens in my case is that I sit down, go “Oh, I know!” and start coding. What I normally end up with is a working application that is clearly written by an engineer — it works just fine, but looks like someone fired the AppKit Shotgun at the screen.

The most recent example of this is my new camera. After writing a blog post complaining about how it was stupid for having Facebook and Twitter built-in, I set about sort of reverse-ish engineering the protocol it uses so I could remote control it and download images from the comfort of my chair. The protocol is PTP over IP — PTP is a documented standard, and the “over IP” part is pretty standard too, which is why I hesitate to say I reverse-engineered the protocol. However, the proprietary extensions added by Canon are largely undocumented, which is where I’ve added new knowledge to the area.

After a couple of weekends with Wireshark and Xcode, I had an Objective-C framework with a reasonably complete implementation of the PTP/IP protocol — enough to get and set the various properties of the camera, stream the live view image, perform commands and interact with the filesystem.

A typical side-project: function, but no form.

After a few weeks of doing basically nothing on the project, I went to WWDC and saw Apple’s iOS 7 announcement. After years of being a “Mac and Frameworks” guy, I finally started getting excited about iOS UI programming again, and this camera project seemed like a great way to get back into making iOS apps and learning all the cool new stuff in iOS 7.

However, wouldn’t it be nice to actually make a product rather than an engineer’s demo project?

Something Something Buzzword Agile

At Spotify, we use Agile. I’m not a die-hard fan of Agile or anything (perhaps it’s just our implementation of it), but I do appreciate how it lets you be organised in how you get work done and how it gives you a picture of what’s left to do.

So, during my July vacation I set aside two weeks with the intention of making a little iPad app to remote control my camera. Rather than immediately creating a new project and jumping into code like I normally do, I decided to employ some techniques — some Agile, some plain common sense — to manage my progress.

Step 1: Mockups

First, I spent a couple of days in Photoshop making mockups of what I wanted to do. The logic behind this was to try and avoid the “blank canvas” feeling you get a few moments after creating a new project in Xcode. With a few mockups under my belt, I would hopefully be able to dive right in without wasting time implementing a UI that I’d have easily seen was unworkable if I’d simply have drawn a picture of it. Indeed, my first idea turned out to be unusable because I’d assumed the camera’s image would fill the iPad’s screen:

Originally, the app’s controls would flank the image vertically, hiding and showing with a tap.

However, when I actually imported an image from my camera it was obvious that layout wouldn’t work. I then spent a little while figuring out how to best deal with an image with a different aspect ratio to the iPad’s screen.

A few mockups refining the main UI. The challenge here is that the aspect ratio of the image coming from the camera isn’t the same as that of the iPad.

Step 2: Insults

When working on anything, having an open feedback loop with your peers is essential. With a project that needs to be done in two weeks, that loop needs to be fast and efficient. Unfortunately, I learned rather quickly at Spotify that in a normal working environment this is completely impossible — apparently, “That idea is fucking stupid and you should be ashamed for thinking otherwise” is not appropriate feedback. Instead, we have to have multi-hour meetings to discuss the merits of everything without being assholes to one another. Coming from the North of England, this came as a huge shock to the system — being assholes to one another is pretty much the only means of communication up there.

For this project, I enlisted the help of Tim (pictured above, enjoying a game of cup-and-ball). Tim and I are great friends, and as such hurl abuse at one another with abandon — exactly the traits required for an efficient feedback loop. Indeed, throughout the project he’d belittle and insult my bad ideas with such ruthless efficiency that I never wasted more than an hour or so on a bad idea.

This is basically the “rubber ducking” theory, except that the duck is a potty-mouthed asshole who isn’t afraid of hurting your feelings.

Step 3: Planning and Monitoring Tasks

This is the typical Agile stuff — I created a note for each task I needed to do in order to have the application I wanted and placed them on a board with Waiting, Doing and Done columns on it. On each note was an estimate on how long I thought that task would take in days, with a margin of error on tasks I wasn’t that sure about — mainly those that involved more protocol reverse-engineering with Wireshark, since I hadn’t figured out the advanced focusing and auto exposure parts in the protocol yet.

Once I’d finished my board, I had between fifteen and twenty-five days’ worth of work to do in ten days. Obviously that wasn’t going to happen, but it was encouraging that everything that looked like it wouldn’t make the cut was an advanced feature rather than core functionality.

Step 4: Programming!

Finally, after three days of mocking and planning, I pushed “New Project” in Xcode and started coding. This seemed like a lot of honest-to-goodness work for what is supposed to be a fun side-project!

Two Weeks Later…

As a bit of a side note: It’s been a long time since I wrote a “proper” application for iOS (my last complete iOS app ran on the original iPhone), and I did everything the new way: it’s all Auto Layout with roughly half of the constraints done in Interface Builder and the rest in code, and there isn’t a drawRect: in sight. I had a lot of fun learning about the new stuff in iOS 7!

But, the golden question is… was adding three days of overhead to plan out what is ostensibly a throwaway side project worth it? Without it, I’d have had thirteen days to code instead of ten, and as a programmer I enjoy coding a lot more than I do planning and drawing boxes in Photoshop.

The answer, much to my surprise, is an unreserved YES.

Progress

I greatly enjoyed the sense of progress moving notes across the board gave, especially when a task took me less time to implement than I’d estimated. It also gave me goals to work towards — having a task I was really looking forward to implementing on the board made working on the boring notes to get there go much faster.

The Board is the Truth

Those little sticky notes really stop you from cutting corners, and cutting corners is what differentiates a hacky side-project from a polished product. For example, one of the most challenging things I encountered in the project was decoding the proprietary autofocus information the camera gives over PTP/IP. There are two main modes: one involves the user moving a single box around the screen and having the camera autofocus inside it, and the other has the user choose from a set of fixed points that correspond to dedicated autofocus sensors in the camera.

The “single box” method was simpler to implement, and I implemented both the protocol work and the UI for it in a day. At this point I was tempted to move on to something else — I mean, you could control focusing now, right? — and without that sticky note on my board I would have done so. After a bit of convincing by Tim, I just couldn’t bring myself to lie to my board and I spent two days implementing the other autofocus method. I’m really glad I did, because I had a ton of fun and ended up with a much more polished product.

Contrast-detect autofocus was fairly simple to implement, as it’s the same on every camera — a single rectangle is defined, which can be moved around the image to define the autofocus area.

Phase-detect autofocus was much harder to implement, mainly due to the focusing point layout — it’s different on every camera. My camera only has nine points, but high-end cameras can have many, many more. This means parsing the autofocus info from the camera properly, as it’ll have different data in it depending on which camera is used.

Statistics

By the end of the two weeks, my application was in a state in which I could use it to take a photo of the board!

Cheesy action-shot…

…of me taking this photo of the board.

As you can see, I added 1.5 days’ worth of work during the week and none of the advanced features got implemented, so all I have is a relatively basic remote control. However, this remote control is much, much more polished than it would have been if I hadn’t planned it all out in advance, and I’m much more pleased with what I ended up with vs. an unpolished project with half-working advanced features.

The tasks that got completed are as follows:

Task                     Estimated    Actual    Date Completed
Live View Image          0.5          0.5       23 July 2013
Connect/Disconnect UI    2            1.5       23 July 2013
Grid                     0.5          0.5       23 July 2013
Histogram                1            1         24 July 2013
Half/Full Shutter UI     1            1         25 July 2013
Property Controls        2            1         29 July 2013
Metering                 2 ± 1        1         30 July 2013
AE Mode Display          0.5          0.5       30 July 2013
Exp. Compensation        0.5          0.5       30 July 2013
Live View Focusing       2 ± 1        2         1 Aug 2013

I did twelve estimated days’ worth of work in nine actual days, and I either estimated tasks correctly or overestimated how long they’d take. The three tasks I added during the week came to one and a half days’ worth.

Conclusion

I actually like Agile in this setting a lot more than I do at work. I get to reap the benefits of organising my work without the tedium of the bureaucracy that you encounter in multiple-person teams of people you’re contractually obliged to be nice to. This really shows in my output — the app I’ve made is really going in the direction a real product might, and if I decide I’d like to put this on the App Store I can just pick it up and keep going without having to go back and fill in all the shortcuts I would’ve made in a typical side project.

Most importantly, though: I had a lot of fun doing this — in fact, more than I normally do when working on side projects in a concentrated manner like this. Having mockups to work from and a visualisation of your progress made this project an absolute blast for me.

The Sine of Life

Note: This post is a personal piece and discusses death. If you’re here for programming and monster trucks, you may want to skip this one.

Nearly three months ago I was sitting here in my chair, writing a blog post about some of the lessons I’d learned from my failed software company. It discussed how, despite having a pretty crappy time of it, I’d come out of the process a smarter and better person. “Things are now looking up!”, I wrote, and discussed how my late father’s entrepreneurial spirit had been passed on to me to the point where I was looking forward to maybe starting another business in the future — one that would surely do better than my first! At the very least, I’d be able to avoid the mistakes I’d made before.

I finished the post mid-afternoon and decided to sit on it for a day and re-read it in the morning to make sure it still made sense with a fresh pair of eyes. I was in a pretty reflective mood — this was the first time I’d really written about my Dad, his death in 1998, and his influence on my life even though he’d long passed away. I was happy — my financial troubles from the aftermath of the failed business were all but gone, I was settling down with my fiancée in a new country and we had nothing but an uncomplicated life to look forward to.

That evening, my Mother died.

Once again, I was reminded just how fragile an illusion the “uncomplicated life” really is. Somewhat terrified by the timing of everything, the blog post I wrote never saw the light of day.

Some Time Later…

Twelve weeks later, life is regaining a sort of twisted normality. Everything seems normal enough — I’m getting up, going to work, paying bills. It’s a little bit harder to gain motivation to do anything, and I’m constantly fighting a small, deep-down urge to run back to the UK and hole up in an old friend’s house and feel sorry for myself until he forcibly ejects me from the premises. However, considering that I’m now the root node of my future family, the urge to run back to a known safe place isn’t exactly shocking.

Thankfully, these minor niggles will be temporary. My Father died just when I was old enough to fully understand the magnitude of what happened, so this isn’t the first time I’ve faced my own mortality — something that invariably happens when a loved one passes. As easy as it would be to fall into a descending spiral of pity right now, I find myself compelled to go the other way — to embrace that my time in this world will be short and strive for something amazing.

I’m 28 and I’ve already done quite a lot. In the past fifteen years I’ve formed a successful company and enjoyed the spoils of a great wage. I’ve lost that company and endured the defeat of near bankruptcy. I’ve moved countries. I’ve been through the devastation that is a parent dying — twice. I’ve fallen in love. I’ve lived the lonely single life.

What persists through this chaos is not the sorrow of lost parents, the crushing pressure that is £50,000 of debt after having lost your entire income stream, or the pain of a poorly managed company failure severing friendships. What persists are happy memories — the joy of flying to Australia for Christmas with my family and girlfriend and driving down the coast in a rented convertible car, of celebrating a successful product launch with a friend by driving a pair of Aston Martins around a track all day, and of countless other experiences and memories I wouldn’t have otherwise had.

In life, the highs balance out the lows, much like a sine wave. Sure, some of my lows are my own fault, but that particular trainwreck had an incredible amount of highs to balance it out. I’ve picked myself up, dusted myself off and righted my wrongs — debts have been repaid, friendships repaired.

What I can’t do now is glide along in mediocrity — it’s time to ensure the upwards swing of my sine wave happens by grabbing life by the scruff of the neck and jumping in with both feet. That convertible car I want that’s entirely inappropriate for a country that spends four months a year buried under a metre of snow? Perhaps it’s time to place an order. Those app ideas that’ve been rolling around in the back of my head? Well, it might soon be time to give being my own boss another shot. What could go wrong?

Right now, I’m still feeling the effects of the low swing of the sine — there’s nothing like spending a week buried in paperwork that formalises “Both of my parents are dead, here’s a pile of figures” to the UK government to keep you in a dull mood.

Looking forward, though — looking up — I’m excited as hell.

Canon EOS 6D “Review”

I’ve been into photography since my Dad (who was a journalist) gave me his Olympus OM-1 SLR when I was a kid. The OM-1 was released in 1972 and was entirely manual — the battery was optional and only powered the light meter.

Alas, my childhood OM-1 has seen better days.

When I hit 19 or so, I got a part-time job at the now-defunct Jessops camera store, and enjoyed my first income stream and a staff discount, which accelerated my photography interest quite a bit. My next camera was the EOS 3000N, a more modern film SLR. After that, I went digital with the EOS 300D, then an original 5D, then a 7D, and after a brief stint with a 60D I’ve landed on the wonderful EOS 6D.

Now, I’m not actually going to review the camera per se — there are lots of photography review websites far more qualified to do that than me. However, I do absolutely adore this thing — its image quality is superb and it’s beautifully light and easy to carry around. The incredible silent drive mode combined with the amazingly tiny 40mm f/2.8 STM pancake lens allows me to wander around the city and take photos without a bunch of people staring at me — I feel discreet with this setup, which is something I haven’t felt in years.

“I’m not fat, I’m just full-frame!”

However, this camera is the first SLR I’ve purchased that has a built-in shelf life on some of its features, which makes me slightly uncomfortable.

The March of Technology

I have a sort of love-hate relationship with technology, specifically computing technology. I’m a programmer, so obviously computing technology and I are intertwined, but I’m a firm believer that if the user is conscious that their machine has computers in it, the designers failed unless the product is specifically advertised as THIS IS A COMPUTER.

Let me give you an example. I’ve been fortunate enough to own several different cars so far in my life, and by far my favourite is the Mazda RX-8. The RX-8 is designed to be a mechanically brilliant car, and excels in this area — you can hurl it around the track like a madman and it’ll just lap it up and egg you on. When you’re driving, it’s all mechanical — a stubby gearstick sticks up directly from the gearbox, the steering has a very direct connection to the road, and on the dashboard you have a thing that tells you how fast the engine is going, another thing that tells you how fast the car is going, and not much else.

The RX-8’s dashboard.

Underneath this is an incredible amount of computing power trying to make sure I don’t slam face-first into the nearest wall — computers making sure the engine is mixing fuel correctly, computers to make sure the car goes in the direction I tell it to, and so on. However, back in the cockpit all you’re doing is giggling wildly as this glorious heap of metal seems to defy the laws of physics as it propels you around the twisty piece of Tarmac that is whatever racetrack you’re on, and there’s absolutely no indication in the car that there’s all this computing going on behind the scenes unless something goes wrong.

And that’s the way it should be.

Isn’t this supposed to be about a camera?

So, how is this relevant to the EOS 6D? Well, to me, cameras are like cars (and washing machines, toasters, fridges, etc) — they’re appliances designed to do a certain job. Your relationship with them is physical, and all of the control is done through the manipulation of buttons, switches and levers, not touch-screens or keyboards and mice. Sure, they’re computers on the inside, but if I’m conscious of that then something has gone wrong somewhere.

The 6D’s chassis (photo credit: Canon).

This is a physical device which I use to make photographs. Sure, it has a computer inside it, but that’s a detail you don’t care about — the specifications talk about the image quality and how well the camera stands up to rain, not gigahertz and RAM. In ten years, I should still be able to use it to take pictures and as long as I can get them out of the camera and into my computer, I’m set. The computing technology in this thing has one job — to help the user create the best photos they can.

It makes me slightly uncomfortable to switch on my camera to see this:

I’m not actually 100% sure why this feature irks me so much. The WiFi feature of the camera is incredibly useful — I can connect it to my computer or phone and shoot and preview shots wirelessly. However, seeing Facebook and Twitter icons on this thing crosses the line from an appliance to a computer. Before, my camera longevity concerns were all physical — how long will the shutter last? What happens if I get it wet? What if I drop it?

Now, I get to think about stuff other than my camera when thinking about my camera. Twitter is notorious for being strict with its API — what if it changes or goes away? What if Facebook goes the way of every other social network before it? That’s fine on a “big” computer — I’m already conditioned to have to update and change the software set on it, but my camera? It’s a metal box with buttons and switches on it, and I shouldn’t have to deal with “computer crap” like this.

Conclusion

Well, the EOS 6D is a great camera and you should all go buy one. However, seeing features like Facebook and Twitter integration in it make me worry about a future filled with appliances whose feature sets have shelf lives. It’s not just this camera, though — the whole industry is going this way, even cars. The Tesla Model S has Google Maps built right into the dashboard, for example.

My Olympus OM-1 is still exactly as functional as it was when it came out forty years ago. Will I be able to say that about my 6D forty years from now? How about if I bought a Model S? It seems that as technology advances, previously immutable appliances like cameras and cars are getting caught in the net of rapidly obsoleting technology.

Then again, maybe I’m just getting old. My first camera didn’t even require a battery, so obviously my idea of what a camera should be is biased towards a mechanical interface, and memories of mechanical cameras are going the way of those of the floppy disk.

It’s Alive, but Still Very Stupid

Well over a year ago, I blogged about starting a project in which I replace a radio-controlled car’s guts with an Arduino and have it be able to navigate to a given GPS location.

Well, that project is finally underway.

Hardware

It very quickly became apparent that an Arduino wouldn’t cut it for the kind of computational work I want to do, mainly because of the tiny amount of RAM it has. I ended up with a pairing of a Raspberry Pi and an Arduino Uno. The Arduino’s job is to interface with the various sensors on the car and pass that information back to the Pi, which has a lot more resources for doing computation.

Note: This chapter is a fairly quick overview of how the car is put together. You can find a shopping list with exact components at the end of this post.

Arduino

The Arduino has a prototype board attached to it, which on the underside has two three-pin connectors for connecting the car’s servos (one for speed, one for steering). The car’s speed controller is connected to the battery and provides the Arduino with power.

The top of the board (which is very messy — I intend to build a much neater one) hosts an accelerometer as well as a few cables for powering the Raspberry Pi, powering the Ultrasonic Sensors and reading data from the Ultrasonic sensors.

Black: Raspberry Pi power, Yellow and white: Ultrasonic sensor power, White four-lane: Ultrasonic sensor data, Raised red board: Accelerometer.

There are four Ultrasonic sensors mounted on the car’s body — three at the front and one at the rear. All the cabling for these sensors ends up at a pair of connectors on the inside of the roof, which allows the body to be easily separated from the chassis when needed.

Leaving the body clear so you could see the electronics seemed like a good idea at the time, but instead it all looks messy and is really hard to photograph. Lesson learned!

The Arduino and prototype board are mounted inside an Arduino case that’s attached to the car’s chassis with zip ties. The case has a lid, but I’ve left it out of the photos to illustrate what goes there.

The vertical posts support the body, which rests on the clips. They can be raised to give more room.

Raspberry Pi

The Raspberry Pi is mated with a UI module from BitWizard, which hosts a 2x16 character LCD display, six buttons and a few breakout connectors for various serial busses.

Raspberry Pi with attached UI module. The garbled character to the right of the up arrow should be a down arrow, but there seems to be a bug in my custom character code!

The Raspberry Pi connects to the Arduino twice — once for power from the Arduino and once via USB to communicate with it. When it’s all assembled, it gets rather messy!

Thankfully, with the body on, it’s a lot cleaner. The final part is to find a housing for the Raspberry Pi and a place to mount it on the car itself.

Here’s a diagram of how everything fits together. Clear as mud!

Software

Arduino

The Arduino is running a very simple loop that polls the attached sensors and writes their values out to the serial port. ACCEL: lines are accelerometer readings in G, and DISTANCE: lines are ultrasonic sensor readings in cm.

ACCEL: 0.06,0.05,0.89
ACCEL: 0.07,0.05,0.90
ACCEL: 0.07,0.05,0.90
ACCEL: 0.06,0.05,0.90
ACCEL: 0.06,0.05,0.88
DISTANCE: 89,111,32,15
ACCEL: 0.07,0.05,0.89
ACCEL: 0.07,0.05,0.90
ACCEL: 0.07,0.04,0.90
ACCEL: 0.07,0.05,0.90
ACCEL: 0.07,0.06,0.90
DISTANCE: 89,111,32,15

Sample Arduino output.
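
For illustration, here’s a minimal sketch of what a loop like that could look like. It assumes an analog three-axis accelerometer and HC-SR04-style ultrasonic sensors; the pin numbers and scaling factors are placeholder guesses rather than the real values, and the actual sketch lives in the project’s GitHub repository (linked further down).

// Minimal polling-loop sketch. Pin assignments, the accelerometer's zero-G
// offset/sensitivity and the distance-poll interval are placeholder guesses.

const int accelPins[3]   = {A0, A1, A2};     // accelerometer X, Y, Z
const int triggerPins[4] = {2, 4, 6, 8};     // ultrasonic triggers
const int echoPins[4]    = {3, 5, 7, 9};     // ultrasonic echoes

unsigned long lastDistancePoll = 0;

void setup() {
  Serial.begin(9600);
  for (int i = 0; i < 4; i++) {
    pinMode(triggerPins[i], OUTPUT);
    pinMode(echoPins[i], INPUT);
  }
}

// Convert a raw ADC reading into G (offset and scale are guesses).
float readAccelerationG(int pin) {
  return (analogRead(pin) - 512) / 102.4;
}

// Standard HC-SR04-style trigger/echo timing, converted to centimetres.
long readDistanceCm(int trigger, int echo) {
  digitalWrite(trigger, LOW);
  delayMicroseconds(2);
  digitalWrite(trigger, HIGH);
  delayMicroseconds(10);
  digitalWrite(trigger, LOW);
  return pulseIn(echo, HIGH, 30000) / 58;    // time out after 30 ms
}

void loop() {
  // Accelerometer readings go out on every pass...
  Serial.print("ACCEL: ");
  for (int i = 0; i < 3; i++) {
    Serial.print(readAccelerationG(accelPins[i]), 2);
    if (i < 2) Serial.print(",");
  }
  Serial.println();

  // ...and distance readings less often, since ultrasonic pings are slow.
  if (millis() - lastDistancePoll > 250) {
    Serial.print("DISTANCE: ");
    for (int i = 0; i < 4; i++) {
      Serial.print(readDistanceCm(triggerPins[i], echoPins[i]));
      if (i < 3) Serial.print(",");
    }
    Serial.println();
    lastDistancePoll = millis();
  }

  delay(50);
}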

In addition, the Arduino listens for input on the serial port for setting speed and steering values for the servos. This is not unlike the protocol used in my Arduino LED project, with two header bytes, a byte for the steering angle (0 – 180), a byte for the throttle angle (0 – 180) and a checksum byte.
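
To make that packet format concrete, here’s a minimal sketch of what the receiving side on the Arduino could look like. The header byte values, the checksum scheme (sum of the two payload bytes) and the servo pins are assumptions for illustration; the real protocol lives in the Arduino sketch on GitHub.

#include <Servo.h>

// Parses five-byte packets: header, header, steering (0-180), throttle (0-180),
// checksum. Header values and checksum scheme are assumed, not the real ones.
const byte kHeader1 = 0xBA;
const byte kHeader2 = 0xBE;

Servo steeringServo;
Servo throttleServo;   // the speed controller is driven like a servo

void setup() {
  Serial.begin(9600);
  steeringServo.attach(10);   // placeholder pins
  throttleServo.attach(11);
}

void loop() {
  // Wait until a whole packet could be buffered.
  if (Serial.available() < 5) return;

  // Resynchronise on the two header bytes.
  if (Serial.read() != kHeader1) return;
  if (Serial.read() != kHeader2) return;

  byte steering = Serial.read();
  byte throttle = Serial.read();
  byte checksum = Serial.read();

  // Only move the servos if the checksum matches.
  if (((steering + throttle) & 0xFF) == checksum) {
    steeringServo.write(steering);
    throttleServo.write(throttle);
  }
}

Validating the checksum before touching the servos means a corrupted or garbled packet is simply dropped rather than steering the car somewhere unexpected.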

Main Software Stack – Raspberry Pi

Everything so far just enables the main software stack to observe and interact with the car’s hardware.

The main software stack is written in C# against the Mono Framework. I chose this setup because it’s pretty much the only nice object-oriented language with a fully featured runtime available on multiple platforms (of course, there’s also Python and Java, but I prefer C# over those two). This setup allows me to write and debug the code on Mac OS X, then copy it over to the Raspberry Pi running Debian Linux for real-life use.

At the moment, the software stack is at the point where it’s a fully functional object model wrapping all of the implementation details of the car:

  • The Sensor class tree provides objects representing the various sensors on the car, firing events when their readouts change.

  • The Servo class provides getters and setters for adjusting the servos on the car.

  • The SerialCarHardwareInterface class implements ICarHardwareInterface, which defines various methods for getting the sensors and servos on the car. It’s split out into an interface so that I can implement a mock car for testing AI routines without risking damage to my car or other property (it goes quite fast!).

  • The CarHTTPServer class provides a simple REST API over HTTP to allow other computers on the network to observe the car’s sensors. This is great for writing tools to visualise the car’s status graphically.

RCSensorVisualizer showing the car’s accelerometer (top) and distance (bottom) sensor readouts graphically.

  • The CarEventLoop class runs a dual loop for running AI code. The first loop is a low-latency loop that monitors the car’s sensors and can have interrupt handlers attached to it — simple classes that decide whether execution should be halted, for example if the car turns upside-down. The second loop runs on a different thread and is where the main AI processes will take place. This dual setup allows the car to detect that it’s upside-down and halt operation even if an AI process is taking a long time.

  • The I2CUIDevice class provides an interface to the screen mounted to the Raspberry Pi, allowing text to be written to the screen and firing events when buttons are pushed.

  • The MenuController class and friends provide logic for presenting a menu system on the display, allowing menu item selection and navigation, as well as “screens” for presenting information or prompting the user to confirm a chosen action.

Bringing It All Together

Below is a video showing the whole lot working together. I scroll through the menus on the Raspberry Pi and observe sensor readouts as well as adjusting the steering and throttle.

The project is now ready for its next steps, which will be writing AI code to have the car navigate its surroundings. It doesn’t have GPS at the moment so it’ll be limited to “drive forwards and don’t crash into stuff” for now, but it’s a start!

You can find the code for this project over on my GitHub. It includes the Arduino sketch, the C# software stack for the Raspberry Pi, and an Objective-C Mac application for observing the sensors.

Shopping List

The project at the moment uses the following hardware:

Your Runtime and You

There’ve been a few posts* floating around the internets recently discussing some rules about when to use Objective-C or not, mainly focusing on performance but touching on code readability and maintainability too. These posts are all written by intelligent people who make reasonable arguments, but I get a rather uneasy feeling reading them.

For a little while I couldn’t place it, but today it came to me — these posts, to me at least, seem to be looking at the problem (or lack thereof) too closely, citing small code snippets and “rules” on how to fix them with the correct choice of language.

* Specifically, point 6 of that post.

Clarifying the Problem

Now, the discussion has become slightly muddled between two issues – one is that Objective-C’s runtime is slower than some other runtimes (like, say, C’s), and the other is code efficiency. Since a method call has more overhead in Objective-C, inefficient code is affected more than the same code in C, so people jump to the conclusion that Objective-C is slow and that the follow-up fix is to move to C.

Inefficient code will always be slow. However, since a method invocation has more overhead in Objective-C than C, C++ or most other static runtimes, people who’ve just learned about the Objective-C runtime will often blame the runtime and switch to C.

Unfortunately, I need to be blunt here: If you think your code is slow because Objective-C is slow, you’re wrong. Your code is slow because you wrote slow code.

The Bigger Picture

In day-to-day programming, you may end up in a situation in which you’re looking at objc_msgSend and thinking that Objective-C is too slow. If this is the case, there are only two outcomes.

  1. You wrote inefficient code and need to go fix it. It sucks, but this will be 99.9% of cases.

  2. You screwed up big-style and Objective-C genuinely isn’t the correct tool for the job at hand. This sucks even more, because you misunderstood your problem and now have to scrap all of the Objective-C code you wrote and write it again in something else. This will be very rare.

Thinking about one implementation detail in your runtime (and with Objective-C, that invariably becomes objc_msgSend) is not thinking about the bigger picture, and you’ll go down a horrible road of writing little sections of code in C, copying data back and forth between the two runtimes, and creating a big horrible mess. You’ll start thinking things like “Accessing this instance variable directly is way faster than using Objective-C properties. This’ll make my app fast!”, and will fall into that horrible trap of pre-optimising stuff that doesn’t actually make a difference.

Instead, you need to be thinking about the behaviour of your runtime and how it affects your problem. Ideally, you should do this before starting to implement your solution to that problem.

Problem 1: I looked at Instruments and objc_msgSend is 10% of my application’s CPU usage!

Is your application actually slow? If not, who cares? If you’re making a lot of method calls, this is to be expected.

This problem has nothing to do with the Objective-C runtime.

Problem 2: I profiled my application when it’s acting slow, and it’s some Objective-C code slowing it down!

Make your code more efficient. Depending on the nature of the problem, one or more of these might help:

  • Stop doing obviously silly things. Loading huge images lazily on the main thread, for instance.

  • Learn about complexity and how to write more efficient code.

  • Learn about perceptive performance. For example, if you do your work on a background thread and keep your UI fluid in the meantime, your application won’t feel slow. It’s better that your application remains fluid and takes ten seconds to do its work than to have it lock up for five seconds. Five seconds is indeed faster, but it feels a lot slower when an application’s UI is blocked.

This problem also has nothing to do with the Objective-C runtime.

Problem 3: I’ll be working in a realtime thread and I’m worried about the fact that Objective-C is a dynamic runtime!

Aha! Now we’re getting somewhere.

Objective-C isn’t slow. It simply isn’t. However, one thing that it is is dynamic. Objective-C’s dynamic runtime gives it all the wonderful features we adore, but it isn’t appropriate for some uses.

Real-time threads can be one of those uses.

But not because Objective-C is slow.

Because it’s dynamic.

A good example of a realtime thread is a Core Audio render thread. When I get that callback from Core Audio asking me for more audio data to play, I have x milliseconds to return that data before the audio pipelines run out of buffer and an under-run occurs, causing playback stuttering.

Because that number is measured in milliseconds rather than nanoseconds, Objective-C would be perfectly fast enough for the job. In fact, if I wrote my audio code in Objective-C it’d likely work just fine. However, because I’m under contract to return data within a certain time, I can’t safely use a dynamic runtime like Objective-C to implement it.

C, for instance, has a static runtime and a fixed overhead for function calls. Copy some arguments to the stack, jump to the function’s address, and away you go.

Objective-C, though, is dynamic and you can’t guarantee a thing. Anyone can load a class into the runtime that overrides -methodSignatureForSelector: and redirects your method calls elsewhere, or can use something like method_exchangeImplementations() to swap out method implementations entirely. This means that at runtime, you can’t count on anything being what you thought it was.

So, because I’m under contract to return within a certain time and I personally believe it’s bad form to use a dynamic runtime in such a situation, I choose to solve that problem entirely in C.

Conclusion

The decision to use the C language for problem 3 was entirely a high-level decision based on the behaviour of the Objective-C runtime compared with the problem at hand.

This is really how you should be thinking about your runtime. If you get to the point where you’ve got some slow code and are noticing an implementation detail of the runtime pop up, you need to go back and code better. If you’ve coded the best you possibly can, you need to learn to code better. If you’ve learned to code better and the runtime is still getting in the way, you chose the wrong runtime for the entire problem.

Notice that this entire post is devoid of any code samples. This is intentional — the point of this post is that you choose your language and runtime first based on your problem, not second because of a problem in your code. If you’re switching languages and runtimes halfway through a problem, the issue is your approach to solving the problem, not the language or runtime you’re using to solve it.

My Life in Pictures

In 2005, two things happened: I bought a kickass car, and Aperture 1.0 was released at the rather eye-watering price of £349. At the end of the year, I was playing with Aperture’s photo book tool by putting some pictures of my car in a book and inadvertently started a tradition I simultaneously despise and adore: My Life In Pictures.

Every year, I make a photo book of the interesting stuff that year brought — a nice side effect is that a book’s thickness is a great way to see how interesting a year was for me. 2009 is an embarrassing 35 pages, whereas 2008 and 2006 are both pushing 100.

I despise them during the construction phase because they take an age to make — over a week of evenings for a long one. By the end I’m so sick and tired of looking at those damn photos that I hesitate to buy the book I’ve put together. Thankfully I’ve learned to push through that feeling and order anyway, because every single time a book arrives I adore the result and happily add it to the slowly expanding collection.

Inside 2012 In Pictures.

These things are expensive — 2012 In Pictures is 63 pages and was ordered through Blurb, which came to around €55 plus tax and shipping. However, they’re well worth it, and I’m already enjoying going back a few years and flicking through what I did in the past. They also make great gifts for family members who are into that sort of thing.