March 24th, 2015

NSConference 7

“I checked the version of your presentation with the video in it, and it works fine. Shall we just use that one, then?”

Panic set in, again. Scotty was already onstage and in the process of introducing me, so I had to think fast. I’d been accepted to give a “blitz talk” — that is, a short, 10-minute presentation — at NSConference this year, and I’d put a little video clip that at best could be described as “stupid” into my slides. I thought it was funny, but I was so worried that it’d be met with a stony silence by the hundreds of attendees that I’d also provided a copy without the video.

At least it’ll be an interesting story to tell, I thought to myself, and confirmed that I’d use the version with the video before stepping out into the blinding lights of the stage.

Here we go!


NSConference has always been about community. I’ve been fortunate enough to attend a number of them over the years, following it around the UK from Hatfield to Reading to Leicester. I’ve met a number of friends there, and it’s always inspiring. The mix of sessions normally has a fairly even distribution of technical and social topics, and this year was no exception — some fantastic speakers gave some wonderfully inspiring talks that really touched close to home, and others gave some fascinating technical talks on the old and the new.

Rather than list them now, I’m going to do a followup post when the NSConference videos are released that’ll link to my favourite talks and discuss why I found them so great.

However, the talks are only half of it. I’m pretty shy around new people, and my typical conference strategy is to sit with people I already know during the day, then hide in a corner or my hotel room during the evenings. This time, however, I was determined to at least try to make friends, and with little effort I found myself speaking to so many new people I can barely remember them all. Everyone was so friendly and so supportive, and I had a huge number of really interesting conversations with people from all over the world.


A joke is a great way to break the ice, someone once said. I started with “The lunches aren’t so light if you go back for thirds, are they?”1, referencing the fact that we’d been given a light lunch that day in preparation for the banquet later. Sensible chuckle from the audience. Alright, maybe my video won’t flop after all!

“Hello everyone,” I continued, “My name is Daniel and for the past four years I’ve been working as a Mac and iOS developer at Spotify. And four days ago — last Thursday — I left to become an Indie developer. Today, I’m—”

I was interrupted by a huge round of applause that went on long enough to mask my stunned silence. This is what NSConference is about: hundreds of friends and strangers coming together to support one another in whatever we’re doing. One of the larger challenges in what I’m doing is the solitude — I’ve left a job where I interacted with a lot of people every day for one where I sit alone in a corner of my house. As I stand on the stage, the applause lifts me up and drives home that while I may physically be on my own, I have a huge community of peers who are right behind me and are willing me to succeed.

As the applause dies down, I do a “Thank you, goodnight!” joke to move around the stage and regain my composure. Thirty seconds later, we arrive at my stupid video.

My thumb hovers over the button to advance the slide and start the video. If I double-click it, it’ll skip the video! A moment’s hesitation…

Click.

That two-second video clip got what I think was one of the biggest laughs of the conference, and I was so relieved I even started laughing at it myself.

Right! Time to get my shit together — I’m supposed to be sharing information!


At the end of the conference, heartfelt things were said onstage as the sun set on the final NSConference — there wasn’t a dry eye in the house. During this, staff handed a glass of whiskey to every single person in the audience. At the very end, Scotty proposed a toast, then left the stage as we clinked glasses.

The last NSConference came to a close with the sound of hundreds of people clinking glasses in toast to seven years of incredible experiences. The sound resonated around the hall for a number of minutes before eventually subsiding, and is something I’ll never forget.

As a tribute to the conference and the work the organisers put in, the community is banding together to raise money for Scotty’s favourite cause, Water.org, which has the goal of providing clean water to everyone who needs it. You can donate at the NSConference 7 fundraiser page.

Clink.

  1. It should be noted that my talk wasn’t really scripted, so I’m recounting what I said from memory. When the video is released it’ll likely prove that I’m misremembering my exact wording. The gist will be the same, though.


March 10th, 2015

Secret Diary of a Side Project: The Refactor From Hell


Why I need a designer: Exhibit A.

THIS BUTTON.

This innocuous little button cost me a week. Let that settle in. A week.

It’s a simple enough premise — when the user is presented with a dialog like this, you should give them a way out. Presenting a button-less dialog is all kinds of scary — what if the camera crashes and doesn’t give the expected response, or any response at all? Sure, I can guard against that, but still.

So, it’s settled! I’ll implement a Cancel button so the user can back out of pairing with their camera. What a completely logical and easy thing to do.

PROGRAMMING!

Here’s the problem I faced:

Typically, when you connect to a camera you send it a message to initialise a session, then wait for a success response. This normally takes a small number of milliseconds, but when the camera is in pairing mode it won’t respond at all until the user has gone through a few steps on the camera’s screen.

All we need to do is sever the connection to the camera while we’re waiting, right? Easy enough. However, the architecture of my application has it working with the camera in a synchronous manner, writing a message then blocking until a response is received. All this is happening on a background thread so it doesn’t interfere with the UI, and since the camera has a strict request-response pattern, it works well enough. However, in this case, I can’t sever the connection on the camera’s thread because it’s completely blocked waiting for a response. If I try to do this from a separate thread, I end up with all sorts of nasty state — dangling sockets and leaked objects.

The solution to this sounds simple — instead of doing blocking reads, I should schedule my sockets in a runloop and use event-based processing to react when responses are received. That way, nothing will ever be blocked and I can sever the connection cleanly at any point without leaving dangling sockets around.
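
For illustration, here’s a minimal sketch of that approach using NSStream. The class is made up for this example and the real transport code is rather more involved, but the run loop scheduling is the important part:

#import <Foundation/Foundation.h>

@interface CameraConnection : NSObject <NSStreamDelegate>
@property (nonatomic, strong) NSInputStream *inputStream;
@property (nonatomic, strong) NSOutputStream *outputStream;
@end

@implementation CameraConnection

-(void)open {
    // Schedule both streams in the run loop so reads and writes arrive as
    // delegate events instead of blocking a thread.
    self.inputStream.delegate = self;
    self.outputStream.delegate = self;
    [self.inputStream scheduleInRunLoop:[NSRunLoop currentRunLoop] forMode:NSDefaultRunLoopMode];
    [self.outputStream scheduleInRunLoop:[NSRunLoop currentRunLoop] forMode:NSDefaultRunLoopMode];
    [self.inputStream open];
    [self.outputStream open];
}

-(void)close {
    // Nothing is ever blocked waiting on a read, so the connection can be
    // torn down cleanly at any point. This is exactly what the Cancel button needs.
    [self.inputStream close];
    [self.outputStream close];
    [self.inputStream removeFromRunLoop:[NSRunLoop currentRunLoop] forMode:NSDefaultRunLoopMode];
    [self.outputStream removeFromRunLoop:[NSRunLoop currentRunLoop] forMode:NSDefaultRunLoopMode];
}

-(void)stream:(NSStream *)stream handleEvent:(NSStreamEvent)eventCode {
    if (stream == self.inputStream && eventCode == NSStreamEventHasBytesAvailable) {
        uint8_t buffer[1024];
        NSInteger bytesRead = [self.inputStream read:buffer maxLength:sizeof(buffer)];
        if (bytesRead > 0) {
            // Append the bytes to a response buffer and parse complete
            // camera messages out of it as they become available.
        }
    }
}

@end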

Easy!


Seven hours later I’m sitting at my desk with my head in my hands, wishing I’d never bothered. It’s 11pm, and later my wife tells me she’d approached me to come play video games but decided I looked so grumpy I’d be best left alone. I have no idea why it’s not working. I’m sending the exact same bytes as I was before, and getting the same responses. It actually works fine until traffic picks up — as soon as you start to send a lot of messages, random ones never get a response.

Well after midnight, I throw in the towel. I’d been working on this one “little” problem nonstop for eight hours, my code was a huge mess, and I almost threw away the lot.

“I’m such an idiot,” I told my wife as I got into bed, “I even wrote about this on my blog, using the exact code I’m working on as an example”.

Yup, this is that old but reliable code I wrote about a couple of months ago. The class I said I’d love to refactor but shouldn’t because it worked fine.

One way of proving a hypothesis, I suppose.

As I was drifting off to sleep, I had an idea. I decided it could wait until the morning.


I slumped down into my chair the next morning and remembered my idea. Twenty minutes later, it was working like a charm1.

Sigh.

So, now it’s working and a darn sight better looking than my old code. However, the two years’ worth of confidence and proven reliability that I had with the old code has vanished — it seems to work, yes, but how can I be sure? Maybe there are bugs in there that haven’t shown themselves yet.

If You Don’t Have Experience, You Need Data

I’ve been writing unit tests here and there for parts of my app where it makes sense.

“Business logic” code for the app is simple enough to test — instantiate the relevant classes and go to town:

CBLShutterSpeed *speed = [[CBLShutterSpeed alloc] initWithStopsFromASecond:0.0];
XCTAssert(speed.upperFractionalValue == 1, @"Failed!");
XCTAssert(speed.lowerFractionalValue == 1, @"Failed!");

CBLShutterSpeed *newSpeed = [speed shutterSpeedByAddingStops:-1];
XCTAssert(newSpeed.upperFractionalValue == 1, @"Failed!");
XCTAssert(newSpeed.lowerFractionalValue == 2, @"Failed!");

Parsing data given back to us by the camera into objects is a little bit more involved, but not much. To achieve this, I save the data packets to disk, embed them in the test bundle and load them at test time. Since we’re testing the parsing code and not that the camera gives back correct information, I think this is an acceptable approach.

-(void)test70DLiveViewAFRectParsing {
    NSData *rectData = [NSData dataWithContentsOfFile:[self pathForTestResource:@"70D-LiveViewAFRects-1.1.1.dat"]];
    XCTAssertNotNil(rectData, @"afRect data is nil - possible integrity problem with test bundle");

    NSArray *afAreas = [DKEOSCameraLiveViewAFArea liveViewAFAreasWithPayload:rectData];
    XCTAssertNotNil(afAreas, @"afRects parsing failed");

    XCTAssertEqual(31, afAreas.count, @"Should have 31 AF areas, got %@", @(afAreas.count));

    for (DKEOSCameraLiveViewAFArea *area in afAreas) {
        XCTAssertTrue(area.active, @"Area should be active");
        XCTAssertFalse(area.focused, @"Area should not be focused");
    }
}

Alright, so, how do we go about testing my newly refactored code? It poses a somewhat unusual problem, in that my work with this camera is entirely based on clean-room reverse engineering — I don’t have access to any source code or documentation describing how this thing is supposed to work. This means I can’t compile the camera’s code for another platform (say, Mac OS X) and host it locally. Additionally, the thing I’m testing isn’t “state” per se — I want to test that the transport itself is stable and reliable, that my messages get to the camera and its responses get back to me.

This leads to a single conclusion: To test my new code, I need to involve a physical, real-life camera.

Oh, boy.


Unit testing best practices dictate that:

  • State isn’t transferred between individual tests.
  • Tests can execute in any order.
  • Each test should only test one thing.

The tests I ended up writing fail all of these practices. Really, they should all be squished into one test, but a single test that’s 350 lines long is a bit ungainly. So, we abuse the fact that XCTest runs a class’s test methods in alphabetical order to execute the tests in sequence, hence the numbers in the method names below.

First, we test that we can discover a camera on the network:

-(void)test_001_cameraDiscovery {
    XCTestExpectation *foundCamera = [self expectationWithDescription:@"found camera"];

    void (^observer)(NSArray *) = ^(NSArray *cameras) {
        XCTAssertTrue(cameras.count > 0);
        _camera = cameras.firstObject;
        [foundCamera fulfill];
    };

    [[DKEOSCameraDiscovery sharedInstance] addDevicesChangedObserver:observer];

    [self waitForExpectationsWithTimeout:30.0 handler:^(NSError *error) {
        [[DKEOSCameraDiscovery sharedInstance] removeDevicesChangedObserver:observer];
    }];
}

…then, we make sure we can connect to the found camera:

-(void)test_002_cameraConnect {
    XCTAssertNotNil(self.camera, @"Need a camera to connect to");
    XCTestExpectation *connectedToCamera = [self expectationWithDescription:@"connected to camera"];

    [self.camera connectToDevice:^(NSError *error) {
        XCTAssertNil(error, @"Error when connecting to camera: %@", error);
        [connectedToCamera fulfill];
    } userInterventionCallback:^(BOOL shouldDisplayUserInterventionDialog, dispatch_block_t cancelConnectionBlock) {
        XCTAssertTrue(false, @"Can't test a camera in pairing mode");
    }];

    [self waitForExpectationsWithTimeout:30.0 handler:nil];
}

(I’m a particular fan of that XCTAssertTrue(false, … line in there.)

Next, because we’re talking to a real-life camera, we need to make sure its physical properties (i.e., ones we can’t change in software) are correct for testing:

-(void)test_003_cameraState {
    XCTAssertNotNil(self.camera, @"Need a camera to connect to");
    XCTAssertTrue(self.camera.connected, @"Camera should be connected");

    XCTAssertEqual([[self.camera valueForProperty:EOSPropertyCodeAutoExposureMode] intValue], EOSAEModeManual,
                   @"Camera should be in manual mode for testing.");

    XCTAssertEqual([[self.camera valueForProperty:EOSPropertyCodeLensStatus] intValue], EOSLensStatusLensAvailable,
                   @"Camera should have an attached lens for testing");

    DKEOSFileStorage *storage = self.camera.storageDevices.firstObject;
    XCTAssertTrue(storage.capacity > 0, @"Camera should have an SD card inserted for testing.");
    XCTAssertTrue(storage.availableSpace > 100 * 1024 * 1024, @"Camera storage should have at least 100MB available for testing.");
}

Once the camera is connected and verified to be in an agreeable state, we can start testing.

  • In order to test against the case of large amounts of traffic causing dropouts that drove me to insanity that night, I run through every single valid value for all of the exposure settings (ISO, aperture, shutter speed) as fast as I possibly can.

  • To test event processing works correctly, I test that streaming images from the camera’s viewfinder works.

  • To test filesystem access, I iterate through the camera’s filesystem.

  • To test commands, I take a photo.

  • To test that large transfers work, I download the photo the previous test took - about 25MB on this particular camera.

  • And finally, I test that disconnecting from the camera works cleanly.

As you can see, this is a pretty comprehensive set of tests — each one is meticulous about ensuring the responses are correct, that the sizes of the data packets received match the sizes reported by the camera, etc — they’re essentially an automated smoke test.
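
To give a flavour of what these look like, here’s a trimmed-down sketch of the exposure-settings stress test. The property code and the value-listing and value-setting methods used here are illustrative assumptions rather than the framework’s real API:

-(void)test_004_exposureSettingStressTest {
    XCTAssertTrue(self.camera.connected, @"Camera should be connected");

    // NOTE: -validValuesForProperty: and -setValue:forProperty:completionCallback:
    // are illustrative names, as is EOSPropertyCodeISOSpeed.
    NSArray *isoValues = [self.camera validValuesForProperty:EOSPropertyCodeISOSpeed];
    XCTAssertTrue(isoValues.count > 0, @"Camera should report some valid ISO values");

    for (NSNumber *iso in isoValues) {
        XCTestExpectation *valueSet = [self expectationWithDescription:@"ISO value set"];

        [self.camera setValue:iso forProperty:EOSPropertyCodeISOSpeed completionCallback:^(NSError *error) {
            XCTAssertNil(error, @"Error setting ISO %@: %@", iso, error);
            [valueSet fulfill];
        }];

        // Hammering the camera with values as fast as it will acknowledge them
        // is exactly the traffic pattern that exposed the dropped-message bug.
        [self waitForExpectationsWithTimeout:10.0 handler:nil];
    }
}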

The next challenge is to get these to run without human intervention. I can’t just leave the camera on all the time — if it doesn’t receive a network connection within a minute or two of powering on, it errors out and the WiFi stack needs restarting before it’ll accept connections again, which can’t be done without a human present. Perhaps a software-controlled power switch would allow the tests to power the camera on and off at will. However, that’s a challenge for another day.

I TOLD YOU SO, DAMNIT

So. In an earlier post I talked about being restrained when you think about refactoring code, and my ordeal here is exactly why. At the beginning it looked simple enough to do, but I ended up losing way too much time and way too much sleep over it, and when it finally appeared to work I had no data on whether it was any good or not. If I’d gone through all of that with no good reason it would’ve been a complete waste of time and energy.

But! Thanks to all this work, you can now cancel out of camera pairing from your iOS device! It’s a disproportionate amount of work for a single button, but that’s the way software development goes sometimes — no matter how obvious the next task might look, tomorrow’s just a mystery, and that’s okay. It’s what makes it fun!

Plus, I now have a decent set of smoke tests for communicating with a real-life camera, which is something I’ve been wanting for a long time — a nice little silver lining!

Epilogue

After implementing all this, I decided to have a look at how the camera’s official software approached this problem, UI-wise.

It looks like a floating panel, but it behaves like a modal dialog. There’s no way to cancel from the application at all and if you force quit it, the software ends up in a state where it thinks it isn’t paired and the camera thinks it is paired, and the two will flat-out not talk to one another.

The mobile app can’t possibly be this bad, I thought, and went to experiment. There’s no screenshot here because there is no UI in the iOS app to help with pairing at all — it just says “Connecting…” like normal and you need to figure out that you need to look at the camera on your own.

It’s like they don’t even care.


Next time on Secret Diary of a Side Project, we’ll talk about how to make the transition to working full-time on your side project at home in a healthy way, both mentally and physically.

  1. The problem, if you’re interested, is that the camera throws away any messages received while it’s processing a prior message. This was accidentally worked around in my old code by blocking while waiting for a response. The solution was to maintain a message queue and disallow a message to be sent until a response to the previous one has been received.
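
To make that footnote concrete, the queueing rule boils down to something like this minimal sketch (the class, property and method names are illustrative rather than the framework’s real implementation):

@interface CameraMessageQueue : NSObject
@property (nonatomic, strong) NSMutableArray *pendingMessages; // queued, unsent messages
@property (nonatomic, strong) NSData *messageInFlight;         // the message awaiting a response
@end

@implementation CameraMessageQueue

-(instancetype)init {
    if ((self = [super init])) {
        _pendingMessages = [NSMutableArray new];
    }
    return self;
}

-(void)enqueueMessage:(NSData *)message {
    [self.pendingMessages addObject:message];
    [self sendNextMessageIfPossible];
}

-(void)sendNextMessageIfPossible {
    // The camera throws away anything it receives while it's processing a
    // prior message, so never write while a response is still outstanding.
    if (self.messageInFlight != nil || self.pendingMessages.count == 0) return;

    self.messageInFlight = self.pendingMessages.firstObject;
    [self.pendingMessages removeObjectAtIndex:0];
    // ...write self.messageInFlight to the camera's output stream here...
}

-(void)responseReceived:(NSData *)response {
    // Hand the response back to whoever sent the message, then unblock the queue.
    self.messageInFlight = nil;
    [self sendNextMessageIfPossible];
}

@end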


February 25th, 2015

Rebrand

Welcome to my new blog!

It’s like my old blog, but it has a much lighter appearance and hopefully provides a nicer reading environment. It should also be faster, and much better on mobile. As well as nearly all of my old posts, I’ve added a spiffy new More About Me page with a succinct version of my life story, if you’re interested. I’ve also spruced up the My Apps and Archive pages.

I’ve tried my very best to make sure all the links from my old blog work with this new one, but if you spot anything amiss I’d appreciate you getting in touch with me on Twitter or emailing blog at this domain and letting me know.

I actually ended up going through an interesting journey while putting this together. To make sure that every post was formatted properly in the new engine, I read through every single one of my posts all the way back to 2004 — and let me tell you, ten years ago I was an idiot. I seriously considered removing all the posts I found embarrassing, but in the end I decided that the journey is just as important as the destination, so they stayed. The only posts I removed were ones that were nothing but links to now-defunct websites.

Technical Details

My previous blog was generated by Octopress, which is a blogging product built on top of Jekyll. However, Octopress’ main selling point for newbies to this whole thing (i.e., me a few years ago) is also its biggest drawback — it’s a complete blogging platform out-of-the-box. This makes diving in and customising it extremely daunting, rather like being handed a car and a spanner and being told to replace the clutch plate. I did manage to customise a couple of little things on my old site, but not much.

So, a couple of weeks ago I sat here, new theme in hand, ready to try to put it into Octopress. It was soon apparent that I’d basically have to rip the entire thing apart to fully understand what was going on, and if I was going to do that, why not look at alternatives?

I’d recently tried out another static site compiler called nanoc for another project of mine, and really liked it. Where Octopress provides a fully featured blog out-of-the-box, nanoc provides nothing. The default site is literally a white “Hello World” page with no CSS at all. While this is daunting at first, it’s actually quite liberating — it took me about a week to put this whole thing together from scratch, and I now know every intimate detail about it which makes me really comfortable customising it in any way I need.

How The Site Is Put Together

  • There are three “things” in this entire site:

    1. Posts. These are markdown files.

    2. Pages. These are HTML fragments.

    3. Special items like the RSS feed.

  • Posts are put through a markdown parser (kramdown) then wrapped with the site’s template.

  • Pages are rendered pretty much as-is with nothing special going on other than being wrapped in the template. These include the About, Apps, and Archive pages, as well as the site’s home page.

  • When the template is rendered, pages containing the in_menu tag are placed in the site menu. This allows me to have “hidden” pages (like the 404 page) without any extra work.

  • Binary files (images and the like) live in a submodule of the blog’s source repo. Yes, git isn’t great at binaries (and there’s over 300MB of them for this site), but it works alright for my needs. These files get copied to the output directory with no processing at all.

I’m really pleased with the results of my work, and it gives me greater control over my presence on the web. Over time, I hope to add more features to the site as I work on my web skills.


February 13th, 2015

Secret Diary of a Side Project: Getting To 1.0

Secret Diary of a Side Project is a series of posts documenting my journey as I take an app from side project to a full-fledged for-pay product. You can find the introduction to this series of posts here.

In this post, I’m going to talk about something that strikes fear into the heart of any programmer: planning. You won’t get to 1.0 without it!


If you’re anything like me, it’s likely that you have some form of issue tracker for your side project, detailing various bugs to be fixed and features to be added. In my instance, that ended up being a sort of rolling affair — I’d fix a bunch of things, see that my issue list was diminishing, then spend a while with the app prodding around until I found more things to add to the tracker. This was a perfectly acceptable approach in the beginning.

However, shortly after I committed to do this full-time, I realised I had no longer-term plan. So, I sat down and decided that I’d try to release 1.0 relatively soon after going full-time, allowing plenty of time to gain feedback from real photographers. You see, I have tons of feature ideas but until photographers tell me what they think, I don’t really have any data to tell me if these ideas are any good. Releasing a 1.0 early allows the app to be shaped by its users, rather than my idea of what I think users want.

This is the result, based on nothing more than a loosey-goosey feeling of the state of the project so far:

Milestone                        Date
Start collecting beta invites    2015-03-10
First beta release               2015-03-24 → 28
Post-beta questionnaire          2015-04-28
1.0 App Store submit             2015-05-05

Of course, I’ll be amazed if those deadlines stick. Still! It’s great to have something to aim for. I felt much better about myself.

…for a while.

A few days later I looked at those dates and started to feel a bit of dread. That March 10th date is when I really commit to releasing something – it’s when my marketing starts! I had no idea if I’d be able to do it or not. Eventually I realised the problem — the tasks in my issue tracker didn’t connect my project from where it is now to that 1.0 on May 5th.

It’s time to do some serious planning!

Shhh… Don’t Say “Agile”

I have a love-hate relationship with Agile. My first exposure to it was when I started at Spotify in early 2011. The company was very small at the time, and we were using… scrum, I think? I forget. Anyway, as the company grew the thing we were using turned out not to work so well. So, we tried a new thing. Then another new thing. Then the first new thing again but with a slight modification. Eventually, I flat-out stopped caring. “Just tell me how you want me to stick the notes on the wall, and I’ll be fine”, I’d say.

Fast-forward a few years, and a fellow named Jonathan joined the company. He’d written a book on Agile and handed out some copies. I took one with moderate-at-best enthusiasm, which then sat on my desk gathering dust. A few weeks later, he did a talk on a thing he called the “Inception Deck”, a method of planning out your product at its inception stages.

“This is perfect for Cascable!” I thought, and started furiously scribbling notes. After his talk, I told him I thought it was great. “Oh, really? I’m happy you think so — it’s all from my book though.”

At that point, I returned my copy of his book and bought an eBook of it instead, partly because I felt uncomfortable furthering my own app with something my employer had paid for, but mainly because I like supporting good work.

I feel really uncomfortable plugging things on this blog — it’s not what it’s for. However, Jonathan’s book has immensely helped me as an independent developer trying to get an app out into the world, and a good deal of this post is inspired by things I learned from it. It’s called The Agile Samurai: How Agile Masters Deliver Great Software, and you can find it here at the Pragmatic Bookshelf.

Step One: Figure Out What You Want To Sell

If you were planning your app from the beginning, you’d start by planning what you want your 1.0 to actually be. A side project is completely the opposite of that — you just create a new project and go, plucking ideas out of your head and going with them.

However, that isn’t sustainable if you want to ship a quality product, no matter how much you claim to “live in the code”. At some point you’re going to have to stop and figure this stuff out, which can be pretty daunting if you’re just chugging along in your code editor.

The “Inception Deck” I spoke about earlier really helped me with this. I won’t go into it in detail — it’s in the book I mentioned above as well as on the author’s blog – but it’s basically a set of small tasks you can do to really help kick a project off in the right direction.

Now, I’m not kicking off a project at all, and some of the items in the Inception Deck are geared a bit towards teams working on one project rather than the lone developer, but still — if some of the tasks help bring clarity to my project, I’m all for it!

Alright, it’s time to jump out of development and pretend I’m doing this properly by doing the planning at the beginning. I cherry-picked the most relevant tasks from the Inception Deck, and here’s what I came up with, more or less copy and pasted from my Evernote document:

The Inception Deck for Cascable 1.0

Why Are We Here?

This task helps establish why this project exists to start with.

The applications that come with WiFi-enabled cameras tend to be pretty terrible. We can do better, and make a WiFi connection an indispensable tool on a camera rather than a toy.


Elevator Pitch

This is a fairly standard thing in the software world these days. Describe the product in 30 seconds.

For photographers who need intelligent access to their camera and photos in the field, Cascable is an iOS app that connects to the camera over WiFi and opens up a world of possibilities. Unlike current apps, Cascable will develop and evolve to become an easy-to-use and indispensable tool for amateur and professional photographers alike.


The Not List

This one is new to me and was incredibly helpful. Defining what isn’t in scope for 1.0 can be as useful as defining what is.

In Scope for 1.0 — Things that will definitely make it.

  • Remote control of the basics: exposure control, focus and shutter.
  • Useful overlays for the above. Thirds grid, histogram, AE mode, AF info.
  • Calculating exposure settings for ND filters and astrophotography.
  • Saving calculation presets.
  • Viewing photos on the camera in the list.
  • Downloading photos to the device.
  • Viewing downloaded photos fullscreen, deleting downloaded photos.
  • Sharing downloaded photos and opening them in another app.
  • Apple Watch widget for triggering the shutter.

Not In Scope for 1.0 — Things that definitely won’t make it.

  • Cameras that aren’t Canon EOS cameras.
  • Cloud functionality.
  • Automatic downloading.
  • Support for videos in Files mode.

Unresolved — Things I’m not sure about.

  • Second screen mode for AppleTV, etc.
  • Applying Calculations Mode results to the camera.

What Keeps Me Up At Night

This exercise was also new to me. What things should you worry about, and which of those are beyond your control?

  • Not having dedicated QA.
  • Keeping “on the rails” and getting everything done properly and on time.
  • App Store rejection.
  • Canon getting uppity.

The first two of those are things I know I can fix myself already. The fear of App Store rejection is pretty much life as normal for iOS development, so there’s no real need to worry about that as long as I’m familiar with Apple’s guidelines and don’t bump into the edges of the (admittedly sometimes vague) rules. That last one is more nuanced, and something I need to get legal advice about. That is where I should concentrate my energy: on gaining knowledge.

Conclusion

So, what’s the benefit of writing all this down? Well, I’ve understood what this project is about the whole time, but succinctly describing it to someone else is a bit of a challenge. Not having answers to questions like “Will you support X camera?” or “Can I work with video?” was a bit embarrassing. Now, I can answer “Not at 1.0, no.” with confidence. Sure, I don’t need to answer to anyone else while making my own app, but being able to answer questions to others with confidence does great things for your own internal confidence, too.

Step Two: Fill The Gap Between Now And Then

Alright, so I’ve got an issue tracker full of tasks and a ship date. I also have a general overview of what Cascable 1.0 should be with the Inception Deck. However, I still haven’t brought all this together to form a set of directions to take me from where I currently am on the project to where I want to be for 1.0.

The problem is, as the lone developer of an app, I’m just in too deep. I can’t see the wood for the trees, and various other clichéd sayings about not having a clear view of the whole situation. I came up with all that stuff above completely on my own. How do I know if it’s any good, or just pure garbage?

What I need is an outsider.


Don’t be fooled, she packs a mean punch.

Meet Alana (that’s “Ah-lay-na”, not “Ah-lar-na”), who has agreed to be Cascable’s Product Owner while I get to 1.0. She’s also my wife, so I suppose she’s also the Product Owner of, well, me. She’s agreed to have meetings with me once every two weeks, splitting my journey into Agile-like sprints. I’ll get to explain which targets I missed and why, which targets I did meet, and which targets I plan to meet in the next two weeks.

However, we’re getting ahead of ourselves — my current problem is that even though I have a nice Inception Deck I don’t know exactly what 1.0 should be, never mind how to get to it. Alana also had a concern: “How can I be your Product Owner if I don’t know what the product is?”

It turns out that my problem and her concern can be solved in one step. The reason my issue tracker doesn’t connect the current state of the project to 1.0 is that I just picked ideas out of my head whenever I ran out of tickets. The Inception Deck helps, but it’s still a bit wishy-washy — I need a well thought-out master list of stories to work against. A good way to have Alana know the product? Have her make the list!

Business Time

One Saturday, we sat opposite one another at the dining room table with a pile of Post-It notes, some pens and a camera.

“Alright, “ I said, “You’ve just bought a camera and have realised how crappy the supplied app is. You’re going to hire me to write you an app that enhances your photography experience. I want you to tell me what it should do, and we’ll write each thing down on a note.”

She picked up her camera, prodded at it a bit then said “Erm… I guess it should connect to camera, right?”

Great! Our first story — but this was the very first page, not where the storyline ends. We spent the next hour talking about photography and she made feature suggestions along the way, mainly based on her previous photography experiences. I didn’t make a single contribution to the notes, other than to ask “Why do you want the app to do that?” to make sure that information got written down. Each idea got a note, and after an hour we had a fairly sizeable pile.

After we were done, I quickly added some more notes that contained features I’d already written but she didn’t independently come up with, then started the second half of the exercise:

“Now, I want you to put these all in a line in order of importance to you.”

Again, I didn’t interrupt other than to help when she wasn’t sure. “Should this go higher than Delete photos from the camera or lower?”

This is what we ended up with:

For the first time, I sat back and actually studied the notes. I was floored. In front of me was a complete journey to 1.0 and beyond. Features I hadn’t even thought of were high up the list, and of course they were — they were so stupidly obvious. Conversely, features I’d spent a fair amount of time working on (in particular, a “Night Mode” for the app) were right down towards the bottom, probably past the cutoff point for 1.0, and looking at the list I completely agreed with it being down there. In fact, I couldn’t really argue with the order of the notes at all once I heard the reasoning behind Alana’s chosen position.

I’ve been working on this thing for well over a year and a half now, and two hours with someone with fresh eyes completely changed the project and set it off on the journey to 1.0 with a flying start.

What’s better, every single outstanding bug or feature in my issue tracker fit into one of these Post-It stories perfectly. The app doesn’t handle a camera in pairing mode quite right? Well, that goes in the “Connect to camera” story. Oh, crap — that’s the most important story of them all, I should fix that right away!

Step Three: There’s No Step Three!

This is an absolute lie. Step three is the hardest one of all. Now you have a spiffy plan, you have to execute it.

My project isn’t a “side project” any more. Far from it — it has deadlines, a prioritised story list, and a product owner. Between the start of this post and now, it’s transformed into a fully-fledged software project, and I’m letting it down by only working on it in my spare time. Four weeks from today, however, that’s all going to change!


Next time on Secret Diary of a Side Project, we’ll swing back to some coding and talk about what happens when you ignore my advice and decide to refactor a piece of code that really doesn’t need it.


February 8th, 2015

Stripping Unwanted Architectures From Dynamic Libraries In Xcode

Since iOS 8 was announced, developers have been able to take advantage of the benefits of dynamic libraries for iOS development.

For general development, it’s wonderful to have a single dynamic library for all needed architectures so you can run on all your devices and the iOS Simulator without changing a thing.

In my project and its various extensions, I use Reactive Cocoa and have it in my project as a precompiled dynamic library with i386 and x86_64 slices for the Simulator, and armv7 and arm64 for devices.

However, there’s one drawback to this approach - because they’re linked at runtime, when a dynamic library is compiled separately from the app it ends up in, it’s impossible to tell which architectures will actually be needed. Therefore, Xcode will just copy the whole thing into your application bundle at compile time. Other than the wasted disk space, there’s no real drawback to this in theory. In practice, however, iTunes Connect doesn’t like us adding unused binary slices:

So, how do we work around this?

  • We could use static libraries instead. However, with multiple targets and extensions in my project, it seems silly to bloat all my executables with copies of the same libraries.

  • We could compile the library from source each time, generating a new dynamic library with only the needed architectures for each build. A couple of things bother me about this - first, it seems wasteful to recompile all this unchanging code all the time. Second, I like to keep my dependencies static, and making new builds each time means I’m not necessarily running stable code any more, particularly if I start mucking around with Xcode betas. What if a compiler change causes odd bugs in the library? It’s a very rare thing to happen, but it does happen, and I don’t know the library’s codebase well enough to debug it.

  • If we don’t have the source to start with, well, we’re kinda out of luck.

  • We could figure out how to deal with this at build-time, then never have to think about it again. This sounds more like it!

Those Who Can, Do. Those Who Can’t, Write Shell Scripts

Today, I whipped up a little build-time script to deal with this so I never have to care about it again.

In my project folder:

$ lipo -info Vendor/RAC/ReactiveCocoa.framework/ReactiveCocoa

→ Architectures in the fat file: ReactiveCocoa are:
    i386 x86_64 armv7 arm64

After pushing “build”:

$ lipo -info Cascable.app/Frameworks/ReactiveCocoa.framework/ReactiveCocoa

→ Architectures in the fat file: ReactiveCocoa are:
    armv7 arm64

Without further ado, here’s the script. Add a Run Script step to your build steps, put it after your step to embed frameworks, set it to use /bin/sh and enter the following script:

APP_PATH="${TARGET_BUILD_DIR}/${WRAPPER_NAME}"

# This script loops through the frameworks embedded in the application and
# removes unused architectures.
find "$APP_PATH" -name '*.framework' -type d | while read -r FRAMEWORK
do
    FRAMEWORK_EXECUTABLE_NAME=$(defaults read "$FRAMEWORK/Info.plist" CFBundleExecutable)
    FRAMEWORK_EXECUTABLE_PATH="$FRAMEWORK/$FRAMEWORK_EXECUTABLE_NAME"
    echo "Executable is $FRAMEWORK_EXECUTABLE_PATH"

    EXTRACTED_ARCHS=()

    for ARCH in $ARCHS
    do
        echo "Extracting $ARCH from $FRAMEWORK_EXECUTABLE_NAME"
        lipo -extract "$ARCH" "$FRAMEWORK_EXECUTABLE_PATH" -o "$FRAMEWORK_EXECUTABLE_PATH-$ARCH"
        EXTRACTED_ARCHS+=("$FRAMEWORK_EXECUTABLE_PATH-$ARCH")
    done

    echo "Merging extracted architectures: ${ARCHS}"
    lipo -o "$FRAMEWORK_EXECUTABLE_PATH-merged" -create "${EXTRACTED_ARCHS[@]}"
    rm "${EXTRACTED_ARCHS[@]}"

    echo "Replacing original executable with thinned version"
    rm "$FRAMEWORK_EXECUTABLE_PATH"
    mv "$FRAMEWORK_EXECUTABLE_PATH-merged" "$FRAMEWORK_EXECUTABLE_PATH"

done

The script will look through your built application’s Frameworks folder and make sure only the architectures you’re building for are present in each Framework.

Much better! Now I can throw fat dynamic libraries at my project that contain all the architectures I’ll ever need, and my build process will deal with which architectures are appropriate at any given moment.


February 4th, 2015

Secret Diary of a Side Project: Cold, Hard Cash

Secret Diary of a Side Project is a series of posts documenting my journey as I take an app from side project to a full-fledged for-pay product. You can find the introduction to this series of posts here.


Moolah. Cheddar. Bank. Cash. Benjamins. There are so many slang terms for money it’s hard to keep track. It’s not surprising, really — people typically dislike talking about money, and it’s human nature to try and make light of something that’s not even slightly fun. It’s the same reason people make jokes at funerals, I suppose.

Money puts a roof over your head, puts food on the table, and is required to function successfully in modern society (yes, Bitcoin still counts as money - there’s always that one guy, isn’t there?). A fool and his money are easily parted, the saying goes. Nobody wants to be a fool.

However, if you’re serious about your project you’re going to have to spend money on it at some point. If you don’t, you’re going to damage it in ways you might not be able to see until much later.

It’s very difficult — perhaps even nonsensical — to spend nontrivial amounts of money on something that’s solely a side project. However, now that we’re getting serious about this thing it’s time for that to change — something that can still be really difficult to do even though we’ve shifted our perception of the project in other areas. It really doesn’t help that money has a tendency to suck the fun out of things, either. It cuts fun stuff out of holidays, limits the size of your TV, and gets in the way of that Ferrari that should be on your drive. Spending money on software I’m perfectly capable of writing on my own? Where’s the fun in that?


This post is really text-heavy, so please enjoy this photo of the Northern Lights my wife took while you take a breather.

Invest In Yourself

Time to be blunt: Without a bit of money put into it, your project will never be as good as it could be. If you refuse to put money into it at all, you should really stop now, or at least reverse course and keep your project as a side project. It’s a harsh thing for me to say, but I really believe it.

Trust yourself.

If you’re there “in theory” but are still struggling with it, try my patented FREE MONEY technique.

NOTE: I should probably point out that I’m not a financial advisor, and you really shouldn’t listen to me. This will become apparent in the next paragraph, but still — I’m an idiot. Go talk to someone smart when deciding what to do with your own money. Like I said, money is serious business.

You know when you find a bit of money in your coat you completely forgot was there? Your immediate reaction is “Sweet, free money!”, despite the fact that it was your money to begin with and you’re exactly back where you were before you lost it. You can just take that mental effect to an extreme — open a new savings account called “App Fund”, choose a sensible amount of money to put aside and transfer it over. For bonus points, make an automated monthly/weekly transfer in there. Your money hasn’t gone anywhere, it’s still safe and sound.

Then, don’t look at it for a while. Go do other things. A month or two later, when you’ve survived without that money completely without incident, come back to it. Sweet, free money!

Now you’ve followed my terrible advice (or hopefully not) and put some money aside for your project, how can it actually benefit your project?

Hire Someone Better Than You

Universal Rule #1: Time is money [citation needed], particularly if time is constrained. If you can hire someone to do a way better job than you in a fraction of the time, you should seriously consider it.

Me? I’m terrible at designing stuff. I can throw together some lines that look sort of like an arrow, but that’s it.

A few months ago, the core functionality of the app was settling down and it was time to start working on the UI properly, particularly the part of the app that dealt with photos.

My photo view was a simple grid. When you had a camera connected, it showed you the photos on the camera. When you didn’t, it showed only the ones you’d downloaded. Seems simple enough! However, in practice, it was horrible.

On the left, you see some photos you’ve downloaded. Then you switch on your camera and a couple of seconds later the view changes to the one on the right. Where’d my photos go? Well, I can see the speedboat towards the bottom there, but what about the one of the house?

I banged my head against this for a little while before throwing in the towel and taking to Twitter for help:

I got a few replies and picked two of the most interesting-looking designers to take a look at my UI. I was impressed enough with one of the two that I extended his assignment a bit, and I plan to use him a lot more as this project goes full-time.

Anyway, a stupidly short amount of time later, I had some great mockups that made complete sense, and I felt a bit stupid for not thinking of it before. Still! I’d spent not a crazy amount and come out with a professional designer’s take on my problem, and the walls I was bumping against crumbled away and development picked up again.

It was money well-spent, in my opinion, particularly taking the “time is money” rule into account.

Buy Tools

Universal Rule #2: You can’t build good things without good tools.

Having a little bit of money set aside lets you invest in those tools that make your life a lot easier but are difficult to justify on projects that don’t have a dedicated budget.

In particular for me, the time saved by a single feature of Sketch — the ability to automatically export to @1x, @2x and @3x with a single click — is well worth the $99 it costs. Before I had a budget it was a struggle to justify when I already get Photoshop as part of my $10/month Adobe Creative Cloud subscription. With a budget? No brainer.

Buy Hardware

Universal Rule #3: Always test with real hardware.

Alright, this one isn’t really that universal. However, you really must work with real hardware when building software. Emulators are crappy.

Depending on your project, you might be able to get away with an iOS device or two and be done with it. It’s really important, in my opinion, to get hold of the slowest device you support and test on that as much as you possibly can. If you run your app day-to-day on the slow thing, you’ll kinda do performance optimisation as you go, which is much nicer than working with the latest and greatest for a year, then thinking “Oh, I should test on that old one” a week before release and finding it runs at 2fps!

For my project, though, I work with cameras. For a long time, I was just using the camera that started this whole thing, my Canon EOS 6D. It works perfectly well, but there’s a few problems:

  1. I can’t really say I support “Canon SLRs” if I’ve only tested it with this one model of camera.

  2. The constant testing and debugging cycles have really taken a toll on the battery.

  3. This is my personal camera, and as such I’ve spent a great deal of time setting it up to my liking. However, for testing purposes I have to screw around with it all the time, which gets annoying fast when I just want to go outside and take photos. Even worse, if my tests fill it up with photos of the wall and mess with the settings, I might end up missing a great photo opportunity.

To solve these problems, I bought a different model of camera that’s dedicated to testing. This solves all of my problems — I can screw around with it all I like and not care, the mains adapter I bought with it means I don’t need to worry about the battery, and the fact it’s a different model means I can be more confident about compatibility.

However, you do have to kind of feel sorry for the poor thing. Some cameras get to take photos of beautiful models, some get to see the Northern Lights, others record a couple’s most treasured memories on honeymoon. This one is destined to have a life chained to a desk by a power cord, taking photos of the wall.

Once Cascable ships, I should take it to somewhere beautiful to celebrate.

My Expenses

Adding it all up, I’ve spent well over USD $1,500 on Cascable and I’m still only working on it part-time. Even with this in mind, it’s been absolutely worth it in my eyes — the tools and services I’ve bought have boosted the quality of my project no end.

If you’re interested, here’s a list of my current and near-future expenses.

Item                          Cost (USD)
Test Camera (Canon EOS 70D)   $999
Canon Mains Power Adapter     $119(!)
Design Services               $430
JIRA License                  $25
Sketch License                $99
iOS Developer Membership      $99

As I approach launch, the costs will start to pile up:

Item                     Cost (USD)
Various Hosting Costs    ~$40/month
SSL certificate          $100-$500
Another Test Camera      ~$600
Design Services          ~$1,000 - $2,000

Not including workspace costs (which I’ll discuss in a future post) or the cost of my own time (time is money!), I’m going to be several thousand dollars in the hole by the time I launch.

Once upon a time, a few mistakes ago, this number would have scared me away big time. But! If I’m not willing to invest in myself, how can I expect customers to invest their money and time into me and my app?


Next time on Secret Diary of a Side Project, we’ll talk about taking your project and taking it from whatever state it’s in right now through to launch, through careful planning and a little help from an outsider.


January 25th, 2015

Secret Diary of a Side Project: Coding Practices

Secret Diary of a Side Project is a series of posts documenting my journey as I take an app from side project to a full-fledged for-pay product. You can find the introduction to this series of posts here.

In this post, I’m going to talk about some of the coding practices I’ve picked up over the years that really make a difference when working on projects that have a limited time budget.


There are tons of coding practices that help us be better, faster, and more understandable as coders. However, although this post is pretty long, I only talk about two practices — both of which are focused on keeping projects simple to understand for people new to the project. That’s really important for a side project you intend to see through — because you’re working under severe time constraints, you may well go months between looking at a particular part of a project. You’ll be a newbie to your own code, and future you will love past you a lot more if past you writes a simple, easy-to-understand project.

Keep Dependencies Down, Keep Dependencies Static

Might as well get the most unpopular one out of the way first — I dislike third party dependencies, so I keep them to an absolute minimum. CocoaPods is a third party dependency to manage my third party dependencies, so I don’t use that at all.

My app has four third party dependencies, one of which isn’t included in release builds (CocoaLumberjack).

  • CocoaLumberjack
  • Flurry
  • MRProgress
  • Reactive Cocoa

The list itself isn’t important. What’s important is that each item in it only got there after careful consideration of the benefits and drawbacks.

There’s a huge amount of discussion online about CocoaPods, and I’m going to ignore all of it — CocoaPods doesn’t really add much for the way I approach projects, so I don’t use it.

So, how does a dependency end up on that list?

  1. If I end up in a position where a third party library might seem useful, I figure out if I should rewrite it myself. After all, I don’t truly know how something works unless I wrote it, and if I want 100% confidence in my product, I should write it all (within reason).

  2. If I actually decide I want to use the library, I’ll find the latest stable release of it, add it to my project, and start using it.

  3. I never touch or update that dependency again unless I have a good reason to.

Point 3 in particular makes most of CocoaPods’ usefulness moot. A requirement for me to use a third party piece of code is that it’s mature and stable. If the library is updated frequently and I’m required to keep up with those updates to avoid problems, well, that library gets deleted and I find something else.

Using this approach, I can concentrate on making my app better rather than making sure the spiderweb of dependencies I’ve added don’t screw things up every time they get updated.

Model Code Goes In A Separate Framework Target

While preparing for this post I had a look back at my previous projects and it turns out I’ve been doing this since I started programming in Objective-C and Cocoa back in 2006, and I really love the approach.

Basically, if it doesn’t involve UI or application state, it goes in a separate framework. My iPod-scanning app Music Rescue contained a complete framework for reading an iPod’s database. My pet record-keeping app Clarus contained one to work with the application’s document format.

Even though my camera app isn’t ready yet, I have a stable, tested, fully-documented framework that cross-compiles for iOS and Mac OS X. That framework takes complete care of camera discovery, connections, messaging queues, and all that jazz.

It’s true that this actually adds more work for you, at least at the beginning. Isn’t this post supposed to be about making your life easier? Well, the long-term benefits far outweigh the extra work.

It Provides Separation Of Responsibilities

A huge benefit to this is code readability and separation of process. Suddenly, your application has a huge set of problems it just doesn’t have to care about any more. Sure, you need to care about them, but your application doesn’t. It makes the application lighter, easier to work with and that bit less complicated to understand.

It Encourages You To Future-Proof and Document APIs

This is an interesting one. Now your logic is in a completely separate target, suddenly it’s a product all of its own. It needs documentation. It needs a stable and thought-out API.


This code in the Mac Demo app hasn’t changed since 2013, even though camera discovery has been refactored at least twice in that time.

This pays dividends down the road if pulled off correctly. Designing APIs is hard — I’ve been designing public APIs for Spotify for a number of years now, so I’ve stumbled through all the terrible mistakes already. Some pointers for designing APIs that stand the test of time:

  • No non-standard patterns get exposed publicly. Sure, your task abstraction layer/KVO wrapper/functional programming constructs are amazing now, but in two years? You’ll regret exposing it publicly when you move to the new hotness. Plus, users shouldn’t need to learn your weird thing just to connect to a camera — even if that user is you in six months.

  • Document everything as you go. Header documentation is great in Xcode these days.


“How does this thing behave again?” Opt-Click “Aha!”

  • If you need to do background work, have the library completely deal with it. The client application shouldn’t have to care about it at all. A common pattern is to have public methods dispatch to a queue/thread privately managed by the library, with the aim of making the library somewhat thread-safe. If clients find themselves needing direct access to the private queue/thread, rethink your APIs so they don’t — it’s a pretty bad code smell. Always document what queue/thread callbacks come back on, or take a parameter to let the client tell you. There’s a rough sketch of this below.
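
As a rough sketch of the last two points combined, a documented public method that does its work on a privately managed queue might look something like this. The downloader class, file type and method names are invented for the example; the pattern, not the names, is the point:

// CameraFileDownloader.h (hypothetical example)

/** Downloads a file from the connected camera.

 @param file The file to download.
 @param completionQueue The queue to call the completion handler on.
 @param completionHandler Called with the downloaded data, or nil and an error on failure. */
-(void)downloadFile:(DKEOSFile *)file
    completionQueue:(dispatch_queue_t)completionQueue
  completionHandler:(void (^)(NSData *data, NSError *error))completionHandler;

// CameraFileDownloader.m

-(void)downloadFile:(DKEOSFile *)file
    completionQueue:(dispatch_queue_t)completionQueue
  completionHandler:(void (^)(NSData *data, NSError *error))completionHandler {

    // All of the real work happens on a queue the library owns. The client
    // never sees that queue, and it tells us which queue it wants its
    // callback delivered on.
    dispatch_async(self.cameraQueue, ^{
        NSError *error = nil;
        NSData *data = [self performDownloadOfFile:file error:&error];
        dispatch_async(completionQueue, ^{
            completionHandler(data, error);
        });
    });
}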

It Makes Quick Prototyping and Testing Crazy Easy

This is my favourite benefit of the multi-target approach, and where you really start to reel in the time savings. Making the core of the application compile for Mac OS X means I can prototype super easily.

I have a Mac target called Cascable Mac Demo. It’s a wonderful little debugging tool — it supports viewing all of the camera’s properties, taking a photo, browsing the file system and downloading files, and streaming the camera’s viewfinder image. Thanks to having a feature-complete library with a thought-out API, the entire application is less than 250 lines of code.

This little application makes building and testing new functionality a breeze. When launched, it’ll connect to the first camera it finds and set up just enough state to let me add some code to test new functionality as it’s being built.

This is a much better approach than adding some random code somewhere in the main iOS app to make sure new functionality is coming together properly, and makes sure my core functionality is mostly working and complete before it ever goes into the main app.
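
As a sketch of how small that launch-and-connect flow can be (the app delegate placement and the camera property are assumptions; the discovery and connection calls are the framework’s own):

-(void)applicationDidFinishLaunching:(NSNotification *)notification {
    [[DKEOSCameraDiscovery sharedInstance] addDevicesChangedObserver:^(NSArray *cameras) {
        if (cameras.count == 0 || self.camera != nil) return;

        self.camera = cameras.firstObject;
        [self.camera connectToDevice:^(NSError *error) {
            if (error != nil) {
                NSLog(@"Connection failed: %@", error);
                return;
            }
            // Connected. From here there's just enough state around to start
            // poking at whatever new functionality is being built.
        } userInterventionCallback:^(BOOL shouldDisplayUserInterventionDialog, dispatch_block_t cancelConnectionBlock) {
            NSLog(@"Camera is in pairing mode; follow the steps on its screen.");
        }];
    }];
}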

It Gives You Flexibility

What if I want to release a Mac version of my app one day? Well, the core functionality is already compiling, running and tested on Mac OS X. Hell, the Mac Demo app is more full-featured than some proof-of-concept apps I’ve seen!

If you want to be really flexible and are seriously considering multiple platforms, write your core framework in something cross-platform, like C# (or C++ if you hate yourself). The benefits of a consistent, mature, tested library across all of your platforms will pay dividends.


Cascable in C#? Why not?


Next time on Secret Diary of a Side Project, we’ll talk about one of the most difficult things in this whole process: cold, hard, cash money.


January 12th, 2015

Secret Diary of a Side Project: Perception Shift

Secret Diary of a Side Project is a series of posts documenting my journey as I take an app from side project to a full-fledged for-pay product. You can find the introduction to this series of posts here.

In this post, I’m going to talk about some of the mental shifts you need to do when taking the leap from side project to real product.


That pivotal moment. It’s weird. You’ve been looking at this project for months, maybe years, slowly adding code here and there. Features slowly form, and the bugs get less “Oh, don’t push that button — it’ll crash” and more “Yeah, I need to make sure those buttons line up”.

Then, one day, you’ll use it for real. In my case, it was in a photography studio learning about portrait lighting. I was taking a break and wanted to look at some of the pictures I’d taken. I happened to have my iPad with me, so I fired up whatever build of the app I had on there and used it to download some photos from my camera. Then, we passed my iPad around and discussed the merits of my photos and how to improve them.

I read or heard or saw a piece by Wil Shipley a while ago (long enough for the memory to have faded a bit, so apologies if I’m mis-quoting him), where he said he takes “Is that all it does?!” as a compliment, as it means that you’ve made something that ‘clicks’ with the user so well that they can just get on with their task without all the stupid computer things getting in the way.

I had that big-style on that afternoon in the studio. At that point, the app was around 15,000 lines of code, which isn’t tiny, to say the least — the main breadwinner from my previous company was 18,000. From a programming point of view, what the app does is a pretty complex task — detecting the camera on the network using their stupid thing that isn’t Bonjour, connecting and handshaking with it, dealing with its finicky protocol, downloading a stupid Canon RAW file to the iPad, converting it to JPG and allowing the user to view it fullscreen.


Downloading photos from a WiFi-enabled SLR in an early build of my app.

My app distilled that down to two taps. Three, if you count launching it in the first place. And in this busy place with lots of people wanting to see my picture, it worked flawlessly. I launched the app and it found and connected to my camera and presented the photos within. “Can I see that one?” someone said. I tapped the “Download” icon, watched a progress bar for a few seconds, then tapped the thumbnail that appeared in the “On this Device” pane to view it fullscreen. “Cool!” the guy said, “…but I think if you moved the light down a bit, you’d make that shadow softer, which I think would make the photo look nicer.”

At that moment, I felt so proud. This is why I make software. The process was so smooth that it completely melted away into the background, to the point where nobody else even noticed the app. Instead of spending time using my app, we spent time learning about studio lighting.


Evidently I missed the “Try not to look like a scruffy hobo” preparation step of going to a studio.

That day, my project shed its “side project” status and became a thing I knew could be genuinely useful to people. I’d been idly considering it for a long time, but now I knew I should try to finish this thing and ship it.


Once this big perception shift has happened, everything changes. It’s really important to take it seriously and be strict with yourself — if you carry on as before, your side project will stay a side project and you’ll end up shipping something that’s not up to scratch.

Before we continue, let’s take a moment to define what a side project actually is:

  1. The primary attribute, from which all the others derive, is that it’s a project written in your spare time, which imposes extreme time limitations.
  2. Features are typically written on the path of least resistance, throwing best practices out of the window in the name of getting a feature done in the hour you have before you go to bed.
  3. Unit tests? Nah. Error checking? // TODO: Handle this properly

Obviously, these behaviours need to change. So, how do we manage the transition?

Keep Refactoring Strict

As soon as you decide to turn a side project into a real product, it’s very tempting to go back and refactor everything immediately. Rebuild the code! Undo all the shortcuts! Add tests! But, if it works… don’t. Not yet.

You need to be really strict about rebuilding if you ever want to ship the thing. The rule of thumb I follow is: If I ship a build to testers with no changes except this refactor, will they notice? Sometimes the answer will be “yes”, and if that’s really the case then go ahead. However, if the answer is “no”, think hard before touching that code.

An example of this is the library that actually speaks to the camera in my app. It has three layers — the outer layer defines a very generic API for the application to consume. “Take a picture please”, “List the files on the memory card”, etc. The middle layer defines slightly lower-level APIs, but still fairly generic. “Send this command with these parameters, then give me the response here”-type stuff. Basically RPC. Finally, the innermost layer deals with the grunt work of encoding the binary messages properly, handling responses, and doing the upkeep needed to keep the camera happy.

These three layers work in concert so the application doesn’t have to care about the workings of the camera at all. I’m actually pretty pleased with the design of most of it.
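
Roughly speaking, the layering looks something like this. The protocol and class names below are invented purely for illustration; the real code is nowhere near this tidy, which is rather the point of this section:

#import <Foundation/Foundation.h>

// Outer layer: a very generic, camera-agnostic API for the app to consume.
@protocol CameraActions <NSObject>
- (void)takePictureWithCompletion:(void (^)(NSError *error))completion;
- (void)listFilesWithCompletion:(void (^)(NSArray *files, NSError *error))completion;
@end

// Middle layer: generic command/response plumbing. Basically RPC.
@protocol CameraCommandChannel <NSObject>
- (void)sendCommand:(NSUInteger)commandCode
         parameters:(NSDictionary *)parameters
           response:(void (^)(NSData *payload, NSError *error))responseHandler;
@end

// Inner layer: binary message encoding, TCP sockets, keep-alives. The messy bit.
@interface CameraTransport : NSObject
- (void)sendPacket:(NSData *)packet completionHandler:(void (^)(NSData *reply, NSError *error))handler;
@end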

However, that inner layer. The most important one. The most intricate one, dealing with TCP sockets and threads and all that fun stuff. It’s terrible from a code-style point-of-view.

However! It works, and it works well. The last commit on that class was in June 2014, adding a minor feature. Before that, I last touched the code in July 2013!


I must’ve been severely sleep-deprived when I wrote this glorious mess. But, it works, so it stays.

As an engineer, I would love to refactor that class. I could redesign the shit out of it — it’d be a work of art! However, by now that class must have handled over a million commands to and from cameras in my care, and it just plain works. That code is over a year and a half old, and now that this project is a real product with a release looming in the not-too-distant future, it just doesn’t make sense to mess with it.

…and Do It Right When You Do

Alright, so it’s time to refactor some code from the side project era. Make sure you do it right — design it properly, add tests, etc. Replacing some old crappy code with slightly less crappy code that hasn’t been thought through or tested properly isn’t worth the effort.

A Commit A Day…

I got this tip from Simon Wolf: do something on your project every single day, no matter how big or small. I started doing this and bar my honeymoon, I’ve kept up with it pretty well. Sometimes (like today) all I do is tweak the in-progress website a bit. Yesterday I moved my icons from Photoshop to Sketch. The day before, I spent nearly 11 hours working on an advanced feature.

Every time you touch your project in any way, unless you’re doing it very wrong, it’s getting better. It might only be a tiny bit better, but it’s getting better.

Over a year’s worth of tiny bit betters and my project is pretty damn mature.

Keep Away From New Shiny

Remember: This is a real product now. If you bring in every shiny new library (or even language) that comes along, at best you’ll spend hours working without actually moving forward, and at worst you’ll end up with a half-working project that has hundreds of dependencies.

This Is Starting To Feel Like Work!

These restrictions take away a lot of the fun things about side projects. This can be a little discouraging at first, but stick with it. You’ll (hopefully) start to find joy in the results of your work rather than the process of making it. Sure, mucking around with autolayout for hours is a pain in the ass, but just look at that UI! It’s glorious!


Next time on Secret Diary of a Side Project, we’ll talk about some actual software development practices and idioms that’ve helped me. There’ll be code and everything!


January 10th, 2015

Compile-Time Image Name Checking

In my continued quest to banish as many string literals as I possibly can from my project, I wrote a little tool to generate a header file containing string constants for the names of all the images in my .xcassets bundle.

This means you get compile-time checking of two things — misspelling an image name in your code, and an image in the .xcassets bundle being renamed or going missing.

The tool is very simple, and is mostly a duplicate of the generate-string-symbols tool I wrote a while ago. It runs through your .xcassets directory and writes the found image names out to a header file. You can see the source over on GitHub.

For consistency’s sake, you can pass -prefix and -suffix parameters to the tool, which do what they say on the tin. A prefix of CBL and a suffix of ImageName will convert an image named Awesome into CBLAwesomeImageName, with a value of @"Awesome".
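
In other words, the generated header ends up containing constants along these lines (the exact output the tool writes may differ a little; this is just to show the shape of it):

// CBLImageNames.h: generated from Images.xcassets. Do not edit by hand.
static NSString * const CBLAwesomeImageName = @"Awesome";
static NSString * const CBLSettingsIconImageName = @"SettingsIcon"; // A second, hypothetical image.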

Putting It All Together

To integrate this into your project, there are three steps:

  1. Generating the header files when your project builds.
  2. Telling Xcode where to find the generated files at build time.
  3. Importing the generated header files so you can use them.

First, you want to create a custom build step in Xcode before the Compile Sources build step to generate header files from your image assets. My custom build step looks like this:

"$PROJECT_DIR/Vendor/generate-imageasset-symbols/generate-imageasset-symbols" 
    -assets "$PROJECT_DIR/Cascable/Images.xcassets"
    -out "$BUILT_PRODUCTS_DIR/include/CBLImageNames.h"
    -prefix CBL
    -suffix ImageName

It uses /bin/sh as its shell, and I have the generate-imageasset-symbols binary in the Vendor/generate-imageasset-symbols directory of my project. It places the generated header file in the include directory of the build directory.

Next, you need to tell Xcode where to search for your files. Make sure your project’s Header Search Paths setting contains $BUILT_PRODUCTS_DIR/include, matching the -out path used in the build step above.

At this point, you can start using the symbols in your project. However, you’d need to #import your generated header file(s) in every file you want to use the image name constants in.

To get around this, you can #import them in your project’s prefix header file.

#import <CBLImageNames.h> // Generated from image assets

UIImage *image = [UIImage imageNamed:CBLAwesomeImageName];

…and you’re done! You can find the generate-imageasset-symbols project over on GitHub under a BSD license. Enjoy!


January 8th, 2015

The Antisocial Network

I have a love-hate relationship with Facebook. I opened my first account after my girlfriend (later fiancée, now wife) wanted to tag me in stuff she did. I survived maybe a couple of years before getting bored and deleting my account.

I opened a new account in 2010 to organise a leaving party when we moved to Sweden. I’ve kept that account to this day.

Starting on a positive note, I enjoy that 99% of my friends are on Facebook since it makes organising events super easy. Their Messenger app isn’t that bad either.

However, my Facebook timeline is absolute trash. I wrote about it a little in my Mental Health post last year, and it’s summed up beautifully in the short (~2:30) film What’s on your mind?, but the lives I’m seeing in tiny, out-of-context snippets through my Facebook timeline are completely false. Hell, I’m guilty of it myself.

The thing is, I know the people in my Facebook timeline are nice people. Sure, I unsubscribed from some friends long ago since they were just spewing out nothing but “funny” pictures and KEEP CALM AND _____ ON slogans, but by-and-large my friends are lovely people. I wouldn’t be friends with them otherwise!

Recently it was getting to the point where I was starting to dislike people I know I really like, even childhood friends I’ve known for decades.

“Why don’t you just unfollow them?”, my wife asked.

So I did. I unfollowed every single person I’m friends with. My Facebook timeline is now completely empty.


Bliss.

The obvious question is “Why don’t you just delete your account?”. Well, since 99% of my friends are on there, I can still chat with them directly and organise events with them, and Facebook is a decent tool for that.

I’m hoping that not receiving a constant stream of tiny snippets will make me think “I wonder what x is up to these days?” from time to time, and end up contacting that person “properly” via some sort of two-way direct chat (IM/phone/whatever). I haven’t spoken to some of these people for over a year, but Facebook has tricked me into thinking that we’re in contact because I see their face on a timeline once in a while, and they see my face in theirs.

Somehow, a social network has made me an unsocial friend.