5 rules of continuous delivery

Inspired by Sandi Metz’s BaRuCo 2013 presentation “Rules” (which you should watch if you haven’t yet), I started thinking about whether there were some rules that might be useful in the continuous delivery domain to “screen for cooperative types”.

I came up with these as a starting point:

Check in everything – we’re used to putting source code in version control, but we’re often less good at configuration management. Do you control your test data? Your database scripts? Your operating system patch level? If not, how can you be sure that the environment you provision tomorrow will be identical to the one you provisioned last week?
Automate everything – “to err is human”, as the saying goes. Any manual process is susceptible to failures, so get rid of as many as you can. Some continuous delivery pipelines still have manual approval steps in them as part of the process, but the presence of the human is not functionally essential.
Continuous != occasionally – the more we do something, the easier it gets. One of the reasons to do something continuously is to decrease the cost and remove the fear. If it “costs too much” to deploy often, then work on reducing the cost, not reducing the frequency.
Collaborate – people are not plug compatible. To get the most from the different people in the organisation we need to work together. For me, this was one of the major “innovations” of devops – no more ‘us’ and ‘them’, just ‘we’.
One step at a time – it’s hard to do everything all at once, which is why we iterate in software development. Continuous delivery is no different – don’t expect everything to “just happen”.
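Two of these rules, “check in everything” and “automate everything”, can be combined into a single automated pre-deploy gate. Here is a minimal sketch in Python; the helper names and the deploy policy are illustrative assumptions, not a prescribed toolchain. The only real convention relied on is that `git status --porcelain` prints nothing when the working tree is clean:

```python
def is_clean(porcelain_output: str) -> bool:
    """`git status --porcelain` prints one line per changed or untracked
    file, so empty output means everything is checked in."""
    return porcelain_output.strip() == ""


def ready_to_deploy(porcelain_output: str, tests_passed: bool) -> bool:
    """Refuse to deploy unless version control is clean ('check in
    everything') and the automated tests are green ('automate everything')."""
    return is_clean(porcelain_output) and tests_passed
```

In a real pipeline the porcelain output would come from running `git status --porcelain` in the deploy script and `tests_passed` from the test stage; keeping the decision logic pure like this makes the gate itself trivially testable.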

Do these make sense to you? What essential ingredients have […]

By |February 28th, 2014|Agile, Continuous Delivery, Practices|3 Comments

Continuous delivery – the novel

I find myself recommending the same books over and over again. When speaking to techies I invariably recommend GOOS; when speaking to managers, The Mythical Man Month or Waltzing With Bears. Over the past year or two, I’ve also pointed a lot of organisations at Continuous Delivery by Jez Humble and Dave Farley. It’s an important book, but I think it could have been shorter, and that matters for the target audience. The other books I recommend weigh in at 384, 336 and 196 pages respectively; Continuous Delivery extends to 512, and it feels longer. That’s not because it isn’t good – it is – but because it is detailed and quite dense.

Last week I finally heeded Liz Keogh’s advice and read The Phoenix Project (“a novel about IT, Devops and helping your business win”). In terms of prose style, it doesn’t compete with Liz’s own efforts, but it is very readable and does a great job of getting some quite tricky concepts across (Lean, Theory of Constraints, The Three Ways). The authors acknowledge their debt to Goldratt’s The Goal, and indeed they are ploughing the same furrow, but in the field of software. Amazon says the print copy is 343 pages long, but I read it on the Kindle and it felt shorter than that.

The reason I’m adding this book to my recommended list isn’t just because it’s short and readable. It’s because it makes some frightening concepts very easy to digest. I didn’t know how to explain quite why I liked it so much until I found myself reading a Venkat Rao post this morning, where he describes how we change our minds:

You have to:

1. Learn new […]

By |February 24th, 2014|Agile, Practices, Systems|1 Comment

Teaching TDD (TTDD)

There has been a flurry of discussion about how to teach TDD, sparked off by a recent post from Justin Searls. In it he lists a number of failures that range from “Encouraging costly Extract refactors” to “Making a mess with mocks”, all of which distract attention from the concept that “TDD’s primary benefit is to improve the design of our code”. He concludes by suggesting that once you have written a failing test, rather than get-to-green in the simplest way possible, you should “intentionally defer writing any implementation logic! Instead, break down the problem by dreaming up all of the objects you wish you had at your disposal”. In essence, design the elements of the solution while the first test is still red.
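Searls’ suggestion can be sketched with test doubles. In this hypothetical Python illustration (the `Checkout` class and its collaborators are invented for the example, not taken from his post), the test “dreams up” the collaborators it wishes it had while it is still red, and the implementation that eventually turns it green simply delegates to them:

```python
import unittest
from unittest import mock


class Checkout:
    """Implementation written only after the test below had named the
    collaborators it wished it had."""

    def __init__(self, calculator, printer):
        self.calculator = calculator
        self.printer = printer

    def complete(self, basket):
        total = self.calculator.total(basket)
        return self.printer.receipt_for(total)


class CheckoutTest(unittest.TestCase):
    def test_delegates_pricing_and_printing(self):
        # Dream up the objects we wish we had...
        calculator = mock.Mock()
        calculator.total.return_value = 42
        printer = mock.Mock()
        printer.receipt_for.return_value = "RECEIPT: 42"

        # ...then drive the design of Checkout through them.
        checkout = Checkout(calculator, printer)
        result = checkout.complete(["apple", "beans"])

        calculator.total.assert_called_once_with(["apple", "beans"])
        self.assertEqual(result, "RECEIPT: 42")
```

Note that the test pins down only the conversation between `Checkout` and its collaborators; whether that constitutes good design, or premature speculation, is exactly the point under debate.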

It’s an interesting post that raises a number of issues, but for me its value lies chiefly in opening the subject up for debate. The introduction is particularly pertinent – just setting a class a bundle of katas to do does not, of itself, encourage learning. The pains experienced while doing the exercise need to be teased out, discussed and have alternative approaches described. If you don’t hear the penny drop, then it hasn’t dropped.

Pitching in with characteristic vigour and brimstone came Uncle Bob with a robust rebuttal containing both heat and light (though some have been put off by the heat and never got to the light). Bob makes some good points regarding the fallacy of writing tests around extracted classes, the tool support for extract refactoring and the central place of refactoring in the Red-Green-Refactor cycle.

By the conclusion, however, Bob has switched tack. He states that while refactorings are cheap within architectural boundaries, they are expensive across them. Whether he’s right or wrong […]

By |February 4th, 2014|Practices, TDD, Unit testing|2 Comments

Eat your own dogma food

The software development community experiences fad after fad. Consultants and thought leaders dream up new methodologies; old practices are relabelled and promoted as the next big thing; flame wars are fought over names, tabs and brace position.

One of the few practices that has stood the test of time is that of “eating your own dog food”, which essentially means that you’ll be the first user of any software that you’re developing. In more polite (and optimistic) circles this is also known as “drinking your own champagne”. When it comes to development practices, I think we need to adopt the same approach.

If you’re going to make dogmatic statements about how software should be developed, then you as a developer should be prepared to stick to them yourself. No more “do as I say, not as I do”. It’s time to eat your own dogma food.

By |January 14th, 2014|Agile, Practices|0 Comments

TDD at interviews

Allan Kelly posted an article on DZone this week predicting that TDD would be a required skill for developers by 2022. Vishal Biyani asked on Twitter about how one might test TDD skills, and I promised to blog about my experience of using Cyber-Dojo in interview situations.

Cyber-Dojo is a browser-based dojo environment developed by Jon Jagger that supports a lot of programming languages and xDD frameworks. It’s great for dojos because it has few of the productivity frills that we’ve come to depend on over the years – no syntax highlighting; no autocompletion; no suggested fixes. That means we have to think about what we’re doing, rather than relying on muscle memory.

As Jon eloquently puts it in the FAQ: “Listen. Stop trying to go faster, start trying to go slower. Don’t think about finishing, think about improving. Think about practising as a team. That’s what cyber-dojo is built for.”

That might be what cyber-dojo was built for, but it turns out that it’s also excellent as an interview environment. Your interviewee writes real code and has to diagnose real compiler/runtime errors. They’ll have to use a browser to remind themselves of all the basic knowledge that has atrophied during years of nanny-IDE development. And, best of all, there’s no save, build or run functionality provided by cyber-dojo. There’s only a single button, clearly labelled: TEST.

Use one of the katas whose instructions have been helpfully included with cyber-dojo, or roll one of your own, and see how your interviewee responds. Every time they press the TEST button, all the code they’ve written is sent over to the server to be built & run and the response is returned, along with a traffic light: green for “all tests passed”, red for […]

By |January 11th, 2014|Practices, TDD, Unit testing|1 Comment

When is a tester not a tester?

No, I’m not trawling through my xmas cracker jokes. I was looking through the programme for DevWeek 2014 and both my sessions are tagged as “Test”. This is following a pattern started at ScanDev last year and followed by several other conferences at home and abroad.

Why am I bothered? It’s not that I mind being associated with testing at all. I don’t think of testers as a lower form of life. I *love* testers. It’s for the same reason that Dan North and Chris Matts started using the “should” word instead of the “test” word all those years ago – developers think that the test track is not for them.

Both my sessions at DevWeek are about types of testing that developers should be doing routinely. “So long, and thanks for all the tests” explores what makes a test valuable and what practices developers should consider adopting. “Mutation testing – better code by making bugs” presents an alternative to meaningless code coverage metrics that can help developers ensure they’re sticking to their definition of done.
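The premise of the mutation testing session – that line coverage alone says little about test quality – can be sketched in a few lines of Python. Everything here (`max_of` and its hand-made “mutant”) is invented for illustration; real mutation testing tools generate the mutants automatically:

```python
def max_of(a, b):
    return a if a >= b else b


def mutant_max_of(a, b):
    # Mutation testing tools make small deliberate bugs like this one:
    # the `>=` has been flipped to `<=`.
    return a if a <= b else b


# This assertion executes every line of max_of (100% line coverage)...
assert max_of(3, 3) == 3

# ...yet the mutant passes it too, so the "fully covered" test would
# never have caught the bug: the mutant survives.
assert mutant_max_of(3, 3) == 3

# A stronger assertion kills the mutant:
assert max_of(5, 2) == 5         # the real code passes
assert mutant_max_of(5, 2) == 2  # the mutant gives the wrong answer
```

A test suite that kills all the mutants has demonstrated something coverage never can: that its assertions actually constrain the behaviour of the code.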

Q. When is a tester not a tester?
A. When they’re a developer.

You’re right. It’s not funny. So, it’s ideal for a cracker.

By |January 9th, 2014|Practices, Unit testing|0 Comments

Aslak’s view of BDD, Cucumber and automated testing

This is a quote from Aslak Hellesoy on the Cukes Google group.

“Even on this list, the majority of people seem to think that Cucumber == Automated Tests == BDD, which is WRONG.

What people need to understand is:

Cucumber is a tool for BDD
Cucumber is a tool for Specification By Example
Specification By Example is just a better name for BDD
Specification By Example / BDD means examples (Scenarios) are written *before* implementation
Specification By Example should happen iteratively, in collaboration with non-technical stakeholders
Automated Tests are a by-product of Specification By Example
Writing Automated Tests does *not* imply you’re doing Specification By Example
Using Cucumber for Automated Tests without doing Specification By Example is stupid
Cucumber is not a tool for Automated Testing, it’s a tool for Collaborative, Executable Specifications”

Aslak Hellesoy – 12th December 2013
Cukes Google Group: https://groups.google.com/forum/#!topic/cukes/XFB7CjWuI14

By |December 14th, 2013|Agile, Cucumber|0 Comments

The context and definition challenge

We’re very good at rationalising. Almost any statement can be justified by the retroactive application of the twin constraints of “context” and “definition.”

As an example, Chris Matts (@papachrismatts) talked about the “death of Agile” in a recent blog post of his, and I took issue with that. We talked about it briefly at a couple of conferences and he explained why it made sense to him:

– context: “Agile” as a set of recipes, not values (c.f. Scrum, SAFe, DAD and accompanying certifications)
– definition: “Dead” means devalued through repeatedly over-promising and under-delivering

I still don’t think that agile has died, and neither does Chris in the general sense, but given the specific circumstances of his post the statement makes sense. But it took me time and effort to gain that understanding – time and effort that someone looking for a reference to support their view might not invest.

Neil Killick (@neil_killick) makes a good point that we often use controversy to stimulate debate, so should we care that our words can be misinterpreted, or quoted out of context? I think the answer is sometimes. Influential members of any community should consider carefully how the constituency that they are addressing might (mis)interpret their statements. No matter how much you may hope that people will think for themselves, the pronouncements of “thought leaders” carry a weight that cannot be ignored.

Misinterpretation of the written word is all too common, however. The “Three Amigos” meeting at the heart of Behaviour Driven Development (BDD) emphasises the need to have frequent, open, high bandwidth collaboration between technical and non-technical participants for just this reason. The differing perspectives of the participants challenge the implicit assumptions of the others.

If you make strong assertions in your tweets, […]

By |December 3rd, 2013|Agile, doa|0 Comments

The Beer Belly testing anti-pattern

The ‘Testing Pyramid’ is often trotted out to illustrate a suggested distribution of tests: more small “unit” tests; fewer deep “end-to-end” tests. And various people have observed common anti-patterns, specifically the Ice Cream Cone, where there are lots of end-to-end tests and hardly any unit tests.

The anti-pattern that I see most often is the Beer Belly. This stems from a misunderstanding of what constitutes a unit test. It’s proven difficult to define exactly what a unit test is, but Michael Feathers described what ISN’T a unit test – notably, tests that hit the network, the file system or the database. Most developers don’t seem to have taken this on board and write many tests that rely on some (or all) of these external components.
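Feathers’ distinction is easy to demonstrate. In this hypothetical Python sketch (the word-counting functions are invented for the example), the same logic is exercised twice: once by a genuine unit test of the pure function, and once by a test that touches the file system and therefore, by his definition, isn’t a unit test at all, however fast it runs:

```python
import os
import tempfile


def count_words(text: str) -> int:
    """Pure logic with no external dependencies."""
    return len(text.split())


def count_words_in_file(path: str) -> int:
    with open(path) as f:
        return count_words(f.read())


# A unit test by Feathers' definition: no network, file system or database.
assert count_words("to be or not to be") == 6

# NOT a unit test: it hits the file system.
with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as f:
    f.write("hello world")
    path = f.name
try:
    assert count_words_in_file(path) == 2
finally:
    os.unlink(path)
```

Separating the pure function from the thin file-reading wrapper is what makes the true unit test possible in the first place; teams whose logic is welded to its I/O have little choice but to write the second kind.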

Combine this tendency to write integration tests instead of unit tests with a whole pile of manual test scripts and you get the classic beer belly topped with a big, cloudy head of woe.

And you know the best way to cure a hangover. Hair of the dog, anyone?

By |November 21st, 2013|Practices|0 Comments

Tax or investment – which do you prefer?

I was working with a client last week who were trying to fit some new technical practices into their daily routine. The way they were trying to ‘account’ for this in their iteration planning was by introducing a 10% ‘tax’ on their velocity. In other words, they were reducing the number of story points that they would accept into the iteration to make up for the time spent getting better at TDD (for example).

Quite apart from whether you think this is a good way to go about it (I don’t) there’s a real issue of terminology. Maybe taxes are levied for the good of the nation, but very few people feel ecstatic about paying them. Taxes ensure you keep getting what you already take for granted.

This team was trying to improve their work, so that they could deliver better value. By calling this a tax I think they were setting the wrong tone for discussions with their customers and management. Far better to describe it, accurately, as an investment. An investment in the staff, the organisation and the product. An investment that would produce returns.

Phil Karlton famously said: “There are only two hard things in Computer Science: cache invalidation and naming things.” It turns out it’s not just in computer science that it’s hard. Who knew?

Do you have any great examples of poorly chosen names that give people the wrong impression of something?

By |October 24th, 2013|Agile|1 Comment