craftsmanship


Making a meal of architectural alignment and the test-induced-design-damage fallacy

Starter
A few days ago Simon Brown posted a thoughtful piece called “Package by component and architecturally-aligned testing.” The first part of the post discusses the tension between the two common packaging approaches, package-by-layer and package-by-feature. His conclusion, that neither is the right answer, is supported by a quote from Jason Gorman (which expresses the essence of thought over dogma):
The real skill is finding the right balance, and creating packages that make stuff easier to find but are as cohesive and loosely coupled as you can make them at the same time
Simon then introduces an approach that he calls package-by-component, where he describes a component as:
a combination of the business and data access logic related to a specific thing (e.g. domain concept, bounded context, etc)
By giving every component a public interface and package-protected implementation, any feature that needs to access data related to that component is forced to go through the public interface of the component that ‘owns’ the data. No direct access to the data access layer is allowed. This is a huge improvement over the frequent spaghetti-and-meatball approach to encapsulation of the data layer. I like this architectural approach. It makes things simpler and safer. But Simon draws another implication from it:
how we mock-out the data access code to create quick-running “unit tests”? The short answer is don’t bother, unless you really need to.
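For readers who haven't seen Simon's post, the shape he describes looks roughly like the following minimal Java sketch. The package and type names here are illustrative, my own rather than Simon's:

// All of this lives in one package, e.g. com.example.orders
package com.example.orders;

import java.util.List;

// The component's public interface: the only type visible outside the package.
public interface OrdersComponent {
    List<String> ordersFor(String customerId);
}

// Package-protected implementation: code in other packages can't reference it.
class OrdersComponentImpl implements OrdersComponent {
    private final OrdersRepository repository;

    OrdersComponentImpl(OrdersRepository repository) {
        this.repository = repository;
    }

    @Override
    public List<String> ordersFor(String customerId) {
        return repository.findOrderIdsFor(customerId);
    }
}

// Package-protected data access: a feature in another package cannot bypass
// OrdersComponent to talk to the data access code directly.
interface OrdersRepository {
    List<String> findOrderIdsFor(String customerId);
}

Any feature that wants order data has no choice but to go through OrdersComponent, which is exactly the encapsulation Simon is after.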
I tweeted that I couldn't agree with that advice, and Simon responded:
This is a topic that polarises people and I’m still not sure why
Main course
I’m going to invoke the rule of 3 to try and lay out why I disagree with Simon.
Fast feedback
The main benefit of automated tests is that you get feedback quickly when something has gone wrong. The longer it takes to run the tests, the longer […]

By |March 19th, 2015|Agile, Practices|2 Comments

Diamond recycling (and painting yourself into a corner)

The post I wrote recently on recycling tests in TDD got quite a few responses. I’m going to take this opportunity to respond to some of the points that got raised.
Do we really need to use the term “recycling”?
The TDD cycle as popularly taught includes the instruction to “write a failing test”. The point of my article was to observe that there are two ways to do that:

write a new test that fails
change an existing, passing test to make it fail

It’s this second approach that I’m calling “recycling”. Alistair Cockburn says that “it’s a mystery this should need a name” and it probably doesn’t. However, I’ve regularly seen novice TDD-ers get into a mess when making the current test pass causes other test(s) to fail. Their safety net is compromised and they have a few options, none of which seem very appealing:

Roll back to last green
Comment out the failing test(s)
Modify the failing test(s) to make them pass again

Whichever way you choose to get out, you'll want to avoid painting yourself into a similar corner in future.
Why do tests that used to pass start failing?
Ron Jeffries suggests that this will only happen if the tests don’t “say something universally true about the problem and solution.” Several people (including George Dinwiddie and Sandro Mancuso) demonstrated that this problem can be solved by writing a series of tests that each say something “universally true.” However, to me, this seems like a similar approach to that recommended by Alistair Cockburn in his “Thinking Before Programming” post.
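To give a flavour of what tests that "say something universally true" might look like, here's a sketch of my own in Java/JUnit, checking the diamond for 'C'. It isn't taken from George's or Sandro's posts, and the PrintDiamond.print name is my assumption:

import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertTrue;

import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

import org.junit.jupiter.api.Test;

public class DiamondPropertiesTest {
    private final List<String> rows = Arrays.asList(PrintDiamond.print('C').split("\n"));

    @Test
    public void hasOneRowPerLetterUpAndBackDown() {
        // True for any letter: 2 * (letter - 'A') + 1 rows.
        assertEquals(5, rows.size());
    }

    @Test
    public void isVerticallySymmetric() {
        // True for any letter: the bottom half mirrors the top half.
        List<String> reversed = new ArrayList<>(rows);
        Collections.reverse(reversed);
        assertEquals(rows, reversed);
    }

    @Test
    public void putsTheWidestLetterOnTheMiddleRow() {
        assertTrue(rows.get(rows.size() / 2).startsWith("C"));
    }
}

Because each assertion stays true however the implementation grows, tests written this way never need to change once they pass.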

I’m a big fan of thinking before programming. In the courses that I deliver, I routinely prevent students from touching the keyboard until they’ve thought their way around the problem. But, it’s just not realistic to expect that […]

By |December 9th, 2014|Agile, BDD, Cyber-Dojo, TDD, Uncategorized|3 Comments

Recycling tests in TDD

The standard way that TDD is described is as Red-Green-Refactor:

Red: write a failing test
Green: get it to pass as quickly as possible
Refactor: improve the design, using the tests as a safety net
Repeat

TL;DR: I’ve found that step 1 (Red) might be better expressed as:

Red: write a failing test, or make an existing test fail
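To make that second option concrete, here's a hypothetical sketch of my own (in Java/JUnit, with an invented Greeting class): an existing, passing test is changed so that it fails again, rather than a new test being added alongside it.

import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.Test;

public class GreetingTest {
    // Before recycling: this test passed against a hardcoded implementation.
    //
    //     @Test
    //     public void greetsTheWorld() {
    //         assertEquals("Hello, World!", Greeting.greet("World"));
    //     }

    // After recycling: the same test has been changed so that it fails,
    // pushing the implementation to become more general.
    @Test
    public void greetsAnyoneByName() {
        assertEquals("Hello, Alice!", Greeting.greet("Alice"));
    }
}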

Print Diamond
One of the katas that I use in my TDD training is “Print Diamond”. The problem statement is quite simple:
Given a letter, print a diamond starting with ‘A’ with the supplied letter at the widest point.

For example: print-diamond 'C' prints

  A
 B B
C   C
 B B
  A
I’ve used Cyber-Dojo to demonstrate two different approaches so you can follow along with my example, but I recommend you try this kata on your own before reading further.
Gorilla
The usual approach is to start with a test for the simple case where the diamond consists of just a single ‘A’:
> PrintDiamond('A')

A
The next test is usually for a proper diamond consisting of ‘A’ and ‘B’:
> PrintDiamond('B')

 A
B B
 A
It’s easy enough to get this to pass by hardcoding the result. Then we move on to the letter ‘C’:
> PrintDiamond('C')

  A
 B B
C   C
 B B
  A
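For illustration, the hardcoded code at this point looks something like the sketch below (in Java; the class and method names are my own, not from the Cyber-Dojo session). It passes the 'A' and 'B' tests, and the 'C' test has just gone red.

public class PrintDiamond {
    public static String print(char widest) {
        if (widest == 'A') {
            return "A";
        }
        // Hardcoded for 'B'; there's no obvious place to bolt on 'C'
        // without starting to solve the general problem.
        return " A\n"
             + "B B\n"
             + " A";
    }
}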

The code is now screaming for us to refactor it, but to keep all the tests passing most people try to solve the entire problem at once. That’s hard, because we’ll need to cope with multiple lines, varying indentation, and repeated characters with a varying number of spaces between them.
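To give a sense of the size of that leap, here's a sketch of my own (again in Java, not taken from the post) of what a complete solution ends up having to deal with in one go:

public class PrintDiamond {
    public static String print(char widest) {
        int size = widest - 'A';
        StringBuilder diamond = new StringBuilder();
        // Top half, including the widest row.
        for (int i = 0; i <= size; i++) {
            diamond.append(row((char) ('A' + i), size)).append('\n');
        }
        // Bottom half: the top half mirrored, without repeating the widest row.
        for (int i = size - 1; i >= 0; i--) {
            diamond.append(row((char) ('A' + i), size)).append('\n');
        }
        return diamond.toString();
    }

    private static String row(char letter, int size) {
        int offset = letter - 'A';
        String indent = " ".repeat(size - offset);   // leading indentation
        if (offset == 0) {
            return indent + "A";
        }
        String gap = " ".repeat(2 * offset - 1);     // gap between the pair of letters
        return indent + letter + gap + letter;
    }
}

Indentation, the gap between the pair of letters, and the mirroring all have to arrive together, which is exactly why the jump from 'B' to 'C' feels so steep.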
Moose
The approach that I’ve been playing with is to start as usual, with the simplest case:
> PrintDiamond('A')

A

For the second test, however, we start by decomposing the diamond problem into […]

By |November 23rd, 2014|BDD, Cyber-Dojo, Practices, TDD|12 Comments

Eat your own dogma food

The software development community experiences fad after fad. Consultants and thought leaders dream up new methodologies; old practices are relabelled and promoted as the next big thing; flame wars are fought over names, tabs and brace position.

One of the few practices that has stood the test of time is that of “eating your own dog food”, which essentially means that you’ll be the first user of any software that you’re developing. In more polite (and optimistic) circles this is also known as “drinking your own champagne”. When it comes to development practices, I think we need to adopt the same approach.

If you’re going to make dogmatic statements about how software should be developed, then you as a developer should be prepared to stick to them yourself. No more  “do as I say, not as I do”. It’s time to eat your own dogma food.

By |January 14th, 2014|Agile, Practices|0 Comments

When is a tester not a tester?

No, I’m not trawling through my xmas cracker jokes. I was looking through the programme for DevWeek 2014 and noticed that both my sessions are tagged as “Test”. This follows a pattern started at ScanDev last year and picked up by several other conferences at home and abroad.

Why am I bothered? It’s not that I mind being associated with testing at all. I don’t think of testers as a lower form of life. I *love* testers. It’s for the same reason that Dan North and Chris Matts started using the “should” word instead of the “test” word all those years ago – developers think that the test track is not for them.

Both my sessions at DevWeek are about types of testing that developers should be doing routinely. “So long, and thanks for all the tests” explores what makes a test valuable and what practices developers should consider adopting. “Mutation testing – better code by making bugs” is an alternative to meaningless code coverage metrics that can help developers ensure they’re sticking to their definition of done.

Q. When is a tester not a tester?
A. When they’re a developer.

You’re right. It’s not funny. So, it’s ideal for a cracker.

By |January 9th, 2014|Practices, Unit testing|0 Comments

Tedium or interest? The choice is yours.

During his excellent software craftsmanship session at Lean Agile Scotland, Sandro Mancuso made an analogy between software maintenance and gardening. A garden needs constant attention – lawns need cut; flower beds need weeded; old and diseased pl...
By |September 25th, 2012|Uncategorized|1 Comment