Tag: bdd

Unit tests are your specification

Recently Schalk Cronjé forwarded me a tweet from Joshua Lewis about some unit tests he’d written.

. @ysb33r I’m interested in your opinion on how expressive these tests are as documentation https://t.co/YpB1A3snUV /@DeveloperUG

— Joshua Lewis (@joshilewis) December 3, 2015

I took a quick look and thought I may as well turn my comments into a blog post. You can see the full code on github.
Comment 1 – what a lot of member variables
Why would we use member variables in a test fixture? The fixture is recreated before each test, so it’s not to communicate between the tests (thankfully).

In this case it’s because there’s a lot of code in the setup() method (see comment 2) that initialises them, so that they can be used by the actual tests.

At least it’s well laid out, with comments and everything – if you like comments, that is (and guess what – I don’t). And they’re wrapped in a #region, so we don’t even have to look at them if our IDE understands it properly.
Comment 2 – what a big setup() you have
I admit it, I don’t like setup()s – they move important information out of the test, damaging locality of reference, and forcing me to either remember what setup happened (and my memory is not good) or to keep scrolling to the top of the page. Of course I could use a fancy split screen IDE and keep the setup() method in view too, but that just seems messy.

Why is there so much in this setup()? Is it all really necessary? For every test? Looking at the method, it’s hard to tell. I guess we’ll find out.
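
To make the locality point concrete, here’s a made-up fixture (nothing to do with Joshua’s actual code) showing the shape I mean, in NUnit-style C#:

  using NUnit.Framework;

  // Made-up example: member variables plus a setup() damage locality -
  // the test can no longer be read on its own.
  [TestFixture]
  public class DiscountTests
  {
      decimal price;      // member variables, initialised far away in SetUp()
      decimal discount;

      [SetUp]
      public void SetUp()
      {
          // imagine dozens of lines of initialisation here...
          price = 100m;
          discount = 0.25m;
      }

      [Test]
      public void Quarter_discount_takes_a_quarter_off_the_price()
      {
          // To understand this test you have to scroll up and read SetUp().
          // Assigning 'price' and 'discount' inline would keep the whole
          // story in one place, at the cost of a little repetition.
          Assert.AreEqual(75m, price * (1 - discount));
      }
  }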
Comment 3 – AcceptConnectionForUnconnectedUsersWithNoPendingRequestsShouldSucceed
Ok, so I’m an apostate – I prefer test names to be snake_case. I just find it so much easier […]
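
For comparison, here is that test’s name in both styles (empty stubs, purely for illustration):

  // The same test name in the two styles:
  public void AcceptConnectionForUnconnectedUsersWithNoPendingRequestsShouldSucceed() { }
  public void accept_connection_for_unconnected_users_with_no_pending_requests_should_succeed() { }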

By |December 5th, 2015|Agile, BDD, TDD, Unit testing|9 Comments

Rolling Rocks Downhill

It’s almost a year since I posted a glowing review of “The Phoenix Project” – a business novel about continuous delivery, following in the footsteps of Goldratt’s “The Goal”. If you haven’t yet read it, then I’m going to recommend that you hold fire, and read “Rolling Rocks Downhill” by Clarke Ching instead.

I should point out that Clarke is a personal friend of mine and a fellow resident of Edinburgh, so I may be a little biased. But I don’t think that this is why I feel this way – it’s just that Clarke’s book feels more real. Both “Rolling rocks….” and “Phoenix…” have a common ancestor – “The Goal”; both apply the Theory of Constraints to an IT environment; and both lead eventually to triumph for the team and the lead protagonist.

There’s something decidedly European about Clarke’s book, which makes me feel more comfortable. The reactions of the characters were less cheesy and I really felt like people I know would behave in the manner described. (Maybe that’s because these characters really are based upon people I know ;)) There really are cultural differences between Europe and North America, and this book throws them into sharp relief. The inclusion of the familiar problems caused by repeated acquisitions and a distributed organisation only added to the feeling that this story actually described the real world, rather than an imaginary universe powered by wishful thinking. The only time that my patience wore thin was around the interventions of the “flowmaster”, Rob Lally, who is ironically a real person.

A truly useful addition in Clarke’s novel is the description of the Evaporating Cloud diagram, which (I now know) is one of the 6 thinking processes from the Theory of Constraints. For […]

By |February 12th, 2015|Agile, Continuous Delivery, Practices|1 Comment

Reduce, Reuse, Recycle

From http://kids.niehs.nih.gov/explore/reduce/:
Three great ways YOU can eliminate waste and protect your environment!

Waste, and how we choose to handle it, affects our world’s environment—that’s YOUR environment. The environment is everything around you including the air, water, land, plants, and man-made things. And since by now you probably know that you need a healthy environment for your own health and happiness, you can understand why effective waste management is so important to YOU and everyone else. The waste we create has to be carefully controlled to be sure that it does not harm your environment and your health.
What exactly is “waste”?
Here’s a list (not exhaustive, by any means):

Code that doesn’t get used
Log entries that give no information
Comments that echo the code
Tests that you don’t understand
Continuous integration builds that aren’t consistently green
Estimates that you aren’t confident in

How can you help?
You can help by learning about and practicing the three R’s of waste management: Reduce, reuse, and recycle!

Reduce – do less. Don’t do something because that’s what you’ve always done. Do it because it adds value; because it helps you or a colleague or a customer; because it’s worth it.
Reuse – repetition kills. Maybe not today or tomorrow, but eventually. Death by a thousand cuts. Eliminate pointless repetition (there’s a sketch of what that looks like after this list), and go DRY (Don’t Repeat Yourself) for January. You’ll enjoy it so much, you’ll never go back.
Recycle – is there nothing to salvage? Really? When you find yourself building that same widget from scratch again, ask yourself “why didn’t I recycle the last version?” Was it too many implicit dependencies? Or was the granularity of your components not fine enough? Try building cohesive, decoupled components. Practice doing it – it gets easier.
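
Here’s a deliberately tiny, made-up C# example of the kind of pointless repetition worth eliminating, and what reusing it looks like:

  // Made-up example. Before: the same formatting rule copy-pasted twice.
  public class ReportBefore
  {
      public string HeaderLine(string title) { return "| " + title.PadRight(20) + " |"; }
      public string TotalLine(string total)  { return "| " + total.PadRight(20) + " |"; }
  }

  // After: the rule lives in one place and is reused.
  public class ReportAfter
  {
      public string HeaderLine(string title) { return BoxedCell(title); }
      public string TotalLine(string total)  { return BoxedCell(total); }

      private static string BoxedCell(string text)
      {
          return "| " + text.PadRight(20) + " |";
      }
  }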

By |January 12th, 2015|Agile, Cyber-Dojo, Musings, Practices|0 Comments

Half a glass

Is this glass half empty or half full?

There’s normally more than one way to interpret a situation, but we often forget that the situation itself may be under our control.

I often find my clients have backed themselves into a corner by accepting an overly restrictive understanding of what they’re trying to achieve. They will tell me, for example, that this story is too big BUT that it can’t be split, because then it wouldn’t deliver value. Or, they might be saying that there’s no point writing unit tests BECAUSE the architecture doesn’t support it. Or even, it’s not worth fixing this flickering test, because it ALWAYS works when they run it locally.

I encourage them to rethink the situation. A slice through that story may not deliver the final solution the customer needs, but it will act as a small increment towards the solution. Minor refactorings are usually enough to make a section of the code independently testable. That flickering test is undermining any confidence there might be in the test suite – fix it or delete it.

Don’t try to put a spin on a half-full glass – rethink the glass.

Instead of asking if the glass is half-full or half-empty, use half a glass.

(It turns out that wittier minds than mine have already applied themselves to this proverb – here’s a bumper collection.)

By |December 15th, 2014|Agile, Musings, Practices|0 Comments

Diamond recycling (and painting yourself into a corner)

The post I wrote recently on recycling tests in TDD got quite a few responses. I’m going to take this opportunity to respond to some of the points that got raised.
Do we really need to use the term “recycling”?
The TDD cycle as popularly taught includes the instruction to “write a failing test”. The point of my article was to observe that there are two ways to do that:

write a new test that fails
change an existing, passing test to make it fail (see the sketch below)
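
As a made-up illustration of that second option (not code from the original discussion): an existing, passing test is edited so that it fails again and pulls the next piece of behaviour into existence.

  using NUnit.Framework;

  // A made-up example of "recycling": this test used to pass with a weaker
  // expectation; strengthening the expectation makes the same test fail
  // again and drive the next piece of behaviour.
  [TestFixture]
  public class GreeterTests
  {
      [Test]
      public void Greeting_includes_the_callers_name()
      {
          // was: Assert.AreEqual("Hello", Greet("Joshua"));
          Assert.AreEqual("Hello, Joshua", Greet("Joshua"));
      }

      // current production code, inlined here to keep the sketch
      // self-contained; the recycled test above now forces it to grow
      private static string Greet(string name)
      {
          return "Hello";
      }
  }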

It’s this second approach that I’m calling “recycling”. Alistair Cockburn says that “it’s a mystery this should need a name” and it probably doesn’t. However, I’ve regularly seen novice TDD-ers get into a mess when making the current test pass causes other test(s) to fail. Their safety net is compromised and they have a few options, none of which seem very appealing:

Roll back to last green
Comment out the failing test(s)
Modify the failing test(s) to make them pass again

Whichever escape route you choose, you’ll want to avoid painting yourself into a similar corner in future.
Why do tests that used to pass start failing?
Ron Jeffries suggests that this will only happen if the tests don’t “say something universally true about the problem and solution.” Several people (including George Dinwiddie and Sandro Mancuso) demonstrated that this problem can be solved by writing a series of tests that each say something “universally true.” However, to me, this seems like a similar approach to that recommended by Alistair Cockburn in his “Thinking Before Programming” post.

I’m a big fan of thinking before programming. In the courses that I deliver, I routinely prevent students from touching the keyboard until they’ve thought their way around the problem. But, it’s just not realistic to expect that […]

By |December 9th, 2014|Agile, BDD, Cyber-Dojo, TDD, Uncategorized|3 Comments

Recycling tests in TDD

The standard way that TDD is described is as Red-Green-Refactor:

Red: write a failing test
Green: get it to pass as quickly as possible
Refactor: improve the design, using the tests as a safety net
Repeat

TL;DR: I’ve found that the first step might be better expressed as:

Red: write a failing test, or make an existing test fail

Print Diamond
One of the katas that I use in my TDD training is “Print Diamond”. The problem statement is quite simple:
Given a letter, print a diamond starting with ‘A’ with the supplied letter at the widest point.

For example: print-diamond ‘C’ prints

  A
 B B
C   C
 B B
  A
I’ve used Cyber-Dojo to demonstrate two different approaches so you can follow along with my example, but I recommend you try this kata on your own before reading further.
Gorilla
The usual approach is to start with a test for the simple case where the diamond consists of just a single ‘A’:
> PrintDiamond('A')

A
The next test is usually for a proper diamond consisting of ‘A’ and ‘B’:
> PrintDiamond('B')

 A
B B
 A
It’s easy enough to get this to pass by hardcoding the result. Then we move on to the letter ‘C’:
> PrintDiamond('C')

  A
 B B
C   C
 B B
  A

The code is now screaming for us to refactor it, but to keep all the tests passing most people try to solve the entire problem at once. That’s hard, because we’ll need to cope with multiple lines, varying indentation, and repeated characters with a varying number of spaces between them.
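
To make the hardcoding concrete, here’s a sketch (assuming C#, and that PrintDiamond returns the diamond as a string) of where that approach typically leaves you just before tackling ‘C’:

  // Hardcoded to satisfy the 'A' and 'B' tests - the 'C' test is what
  // finally makes this approach painful.
  public static class Diamond
  {
      public static string PrintDiamond(char letter)
      {
          if (letter == 'A')
          {
              return "A";
          }
          return " A\n" +
                 "B B\n" +
                 " A";
      }
  }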
Moose
The approach that I’ve been playing with is to start as usual, with the simplest case:
> PrintDiamond('A')

A

For the second test, however, we start by decomposing the diamond problem into […]

By |November 23rd, 2014|BDD, Cyber-Dojo, Practices, TDD|12 Comments

Using SpecFlow on Mono from the command line

SpecFlow is the open source port of Cucumber for folk developing under .NET. It has been compatible with Mono (the open source, cross platform implementation of the .NET framework) for several years, but most of the documentation talks about using it from within the MonoDevelop IDE. I wanted to offer SpecFlow as one of the options in Cyber-Dojo and, since the Cyber-Dojo IDE is your browser, I was looking for a way to make it all happen from the command line.

Cyber-Dojo already offers C#/NUnit as an option, so I used this as my starting point for making SpecFlow available. I came up with a list of tasks:

Install SpecFlow
Generate ‘code-behind’ for each feature file
Include generated code in compilation

Install SpecFlow
I found an interesting website that had detailed instructions for installing SpecFlow on Mono. There don’t seem to be any handy ‘apt-get’ packages, so it is basically a process of downloading the binaries and installing them in the GAC, for example:

  gacutil -i TechTalk.SpecFlow.dll
Generate ‘code-behind’ for each feature file
SpecFlow ships with a command line utility, specflow.exe. I tried following the instructions from the article that had helped me with the installation, but they didn’t work for me. I ended up invoking the utility directly:

  mono ./specflow.exe

Running specflow.exe like this lists the operations that the utility provides, from which I chose ‘generateall’, because (according to the documentation) it should do exactly what I want:
re-generate all outdated unit test classes based on the feature file.
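
To give a flavour of what that ‘code-behind’ is: each .feature file gets a generated class that is just an ordinary NUnit fixture driving the SpecFlow runtime. A simplified sketch follows – this is not actual generated output, which is more verbose and varies between SpecFlow versions, and the feature and scenario names are invented:

  using NUnit.Framework;
  using TechTalk.SpecFlow;

  // Simplified sketch of a generated 'code-behind' class (not real SpecFlow output)
  [TestFixture]
  public partial class AccountsFeature          // one class per .feature file
  {
      private ITestRunner testRunner;

      [TestFixtureSetUp]
      public void FeatureSetup()
      {
          // hooks this fixture up to the SpecFlow runtime
          testRunner = TestRunnerManager.GetTestRunner();
      }

      [Test]
      public void AcceptingAConnectionRequest()  // one [Test] per scenario
      {
          // drives the runner through the scenario's Given/When/Then steps
          // and reports any failures back through NUnit
      }
  }
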
Unfortunately, ‘generateall’ needs a Visual Studio project file (csproj) as input, and since we’re not using Visual Studio we don’t have one. This led me to insert a fourth item in my task list: “Create csproj file”.
Create csproj file
I started with an MSDN article that describes creating a minimal […]

By |October 5th, 2014|BDD, Cyber-Dojo, Practices, TDD, Unit testing|1 Comment

To TDD or not to TDD? That is not the question.

Over the past few days a TDD debate has been raging (again) in the blog-o-sphere and on Twitter. A lot of big names have been making bold statements and setting out arguments, of both the carefully constructed and the rhetorically inflammatory variety. I’m not going to revisit those arguments – go read the relevant posts, which I have collected in a handy timeline at the end of this post.
Everyone is right
Instead of joining in the argument, I want to consider a conciliatory post by Cory House entitled “The TDD Divide: Everyone is right.” He proposes an explanation for these diametrically opposed views, based upon where you are in the software development eco-system:
Software “coaches” like Uncle Bob believe strongly in TDD and software craftsmanship because that’s their business. Software salespeople like Joel Spolsky, Jeff Atwood, and DHH believe in pragmatism and “good enough” because their goal isn’t perfection. It’s profit.
This is a helpful observation to make. We work in different contexts and these affect our behaviour and colour our perceptions. But I don’t believe this is the root cause of the disagreement. So what is?
How skilled are you?
In Japanese martial arts they follow an age-old tradition known as Shu Ha Ri, a concept that describes the stages of learning on the path to mastery. This roughly translates as “first learn, then detach, and finally transcend.” (I don’t want to overload you with Japanese philosophy, but if you are interested, please take a look at Endo Shihan’s short explanation)

This approach has been confirmed, and expanded on, in modern times by research conducted by Stuart and Hubert Dreyfus, which led to a paper published in 1980. There’s a lot of detail in their paper, but this diagram shows the main thrust […]

By |May 2nd, 2014|Agile, TDD, Unit testing|3 Comments

The most important change you can make to help your team succeed

How busy are you?
Are you close to a deadline? Is the team feeling pressure? Every team I visit seems to be under the same heavy workload, and consequently has a long list of improvements to the process, the environment, and the technology that they can’t quite get to yet. They know they need to get to them, and they will… when they have time.

But they won’t get time.

Ever.

TL;DR: Build slack into your iteration

The nature of time
When I started trying to write a book I spoke to a colleague, Jon Jagger. I whined that I just didn’t seem to be finding the time and he replied with wise words: “You won’t find the time. You’ve got to make the time.”

The same is true for any task that supports the development team but doesn’t obviously deliver immediate working software for the customer. The reasons for this are many, but the root cause is normally disempowered teams. Teams who ask their product owner to prioritise tasks that the product owner has no way of prioritising appropriately. Most POs should not be asked to choose between a user story that delivers a feature they understand and a technical task that they don’t. It’s a no-brainer – they’ll always choose the user story to maximise customer value. Even if the technical task would actually deliver greater customer value.

Before I continue, I’d like to remind you of two things: sustainable pace and commitment.
Sustainable pace
The agile manifesto is quite restrained in its pronouncements. One of the 12 principles it states is this: “Agile processes promote sustainable development. The sponsors, developers, and users should be able to maintain a constant pace indefinitely.” This doesn’t mean we don’t work hard. It doesn’t mean we never come in early, work […]

By |April 30th, 2014|Agile, Practices|2 Comments

5 rules of continuous delivery

Inspired by Sandi Metz’s BaRuCo 2013 presentation “Rules” (which you should watch if you haven’t yet), I started thinking about whether there were some rules that might be useful in the continuous delivery domain to “screen for cooperative types”.

I came up with these as a starting point:

Check in everything – we’re used to putting source code in version control, but we’re often less good at configuration management. Do you control your test data? Your database scripts? Your operating system patch level? If not, how can you be sure that the environment you provision tomorrow will be identical to the one you provisioned last week?
Automate everything – “to err is human”, as the saying goes. Any manual process is susceptible to failures, so get rid of as many as you can. Some continuous delivery pipelines still have manual approval steps in them as part of the process, but the presence of the human is not functionally essential.
Continuous != occasionally – the more we do something, the easier it gets. One of the reasons to do something continuously is to decrease the cost and remove the fear. If it “costs too much” to deploy often, then work on reducing the cost not reducing the frequency.
Collaborate – people are not plug compatible. To get the most from the different people in the organisation we need to work together. For me, this was one of the major “innovations” of devops – no more ‘us’ and ‘them’, just ‘we’.
One step at a time – it’s hard to do everything all at once, which is why we iterate in software development. Continuous delivery is no different – don’t expect everything to “just happen”.

Do these make sense to you? What essential ingredients have […]

By |February 28th, 2014|Agile, Continuous Delivery, Practices|3 Comments