Practices

Huge scale deployments

Deploying to 10 or 20 servers can be complicated. But what are some of the tools, tips and patterns for successfully deploying to 1,000s of servers around the world? Last month I participated in an online panel on the subject of Huge Scale Deployments, as part of Continuous Discussions (#c9d9), a series of community panels about Agile, Continuous Delivery and DevOps hosted by Electric Cloud. You can watch a recording of the panel:

The discussion progressed through a short deck of slides with big ticket titles. Plenty of interesting things were said (mostly by other people), but the main insight I had was that the practices discussed are applicable at any scale. Meanwhile, here are a few edited extracts from my musings on the video.

How do you practice huge scale deployments? How did the teams at Amazon make it work?

“I was a software developer at a little-known Amazon development center in Edinburgh, Scotland. … at Amazon, teams are responsible for provisioning environments, continuous deployment, everything on their own. Because we knew that we were picking up the pieces if it went wrong, we were going in making sure our deployment was good in early stages of the pipeline, because we had responsibility. When people have responsibility they make sure that things don’t break.”

“With regard to incentives – at Amazon there’s no need to dangle carrots, the teams know that everything that happens, designing, provisioning for expected loads, all the way to fixing problems in production, is their responsibility. So everyone practices ‘if it hurts do it more often’. It’s a natural human tendency to back off if there’s pain, but in software development, if something causes you pain you need to really work at that, you’re just not good enough at it […]

By |September 1st, 2015|Continuous Delivery, ContinuousDelivery, Musings, Practices|0 Comments

BCS: Agile Foundations

Preconceptions challenged
I really wanted to dislike this book, and in some respects I managed to achieve my goal. This is a book published to support yet another spurious agile certification (YASAC?), and I really don’t like that. The authors continually use ‘Agile’ as a capital-A noun rather than the lower-case-a adjective it clearly ought to be, and I have ranted about that before. And then they resort to anthropomorphising in the fashion “Agile doesn’t say …”, which I find hugely objectionable.

On the other hand, this book does deliver on its claim to provide “a comprehensive introduction to the core values and principles of Agile methodologies.” It describes, without too much embellishment, the fundamental and constituent principles of approaches that support a responsive development process.

The writing is clear and mostly correct, though some things are stated as facts when they are clearly not. For example:

“There are generally three styles of Agile delivery …”
“A key concept in Agile is emergent or opportunistic design.”
“TDD validates that … the design is appropriate with minimal technical debt”

Broad horizons
This book really triumphs by continually going further and offering the reader broader avenues of discovery. There are continuous references to techniques, approaches and research that go far beyond the usual agile trivia. There’s mention of BDD, Real Options, Feature Injection, Cynefin, Maslow’s hierarchy of needs, Reinertsen’s economic model and Kotter’s 8 stage model – to name just a few.

Some of these are explained in reasonable detail, others are just mentioned in passing, but all contribute to making this book a valuable resource for someone who wants to escape the narrow confines of a typical agile transformation. There’s more to it than stand-ups and planning poker, after all.
Structure
The book is split into 4 sections:

Introducing Agile: A whistle stop tour around the manifesto, that still manages to mention […]

By |May 13th, 2015|Agile, Practices|0 Comments

Continuous delivery conversations

Last week I was at the CoDeOSL conference in Oslo. It was an interesting day, with some very good sessions, but as usual it was the conversations in the breaks that were of most interest. I’d like to describe two discussions I had that seem, on the face of it, to be in conflict.
Conversation #1
The night before the conference there was a meetup in a bar just around the corner from the conference hotel. I arrived a bit late and tired (because I was still recovering from the ACCU conference which had finished the previous Saturday) and struck up a conversation with Mike Long (who was also recovering from the ACCU conference). We were talking about patterns of interacting with source control that support continuous delivery – specifically the Automated Git Branching Flow that has been proposed by JOSRA.

Mike had an A5 leaflet that described the flow, including the diagram above, but I found that I needed to read the full article on the web before I could start to make sense of it. My distillation of their words is that:

continuous delivery depends on trunk always being potentially shippable
merging is always dangerous, so don’t allow complex merges onto trunk
automate the process of performing trivial, fast-forward merges onto trunk

Adopting this process (using the Open Source Jenkins plugins provided) leads to a flow that requires developers to merge from trunk before pushing their changes. As long as the automated merge to trunk that results from this push is a fast-forward merge, then everything is good. If it isn’t, then the push fails and the developer has to fix the problem locally, before attempting to push again.
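The fast-forward rule is easy to model. Below is a minimal Python sketch of the gate logic as I understand it – my own illustration, not the JOSRA plugin code: commits are nodes with parent links, and a push is allowed to fast-forward trunk only if trunk’s head is already an ancestor of the branch head being pushed (i.e. the developer merged from trunk first).

```python
# Minimal model of the fast-forward gate: commit ids map to parent ids.

def ancestors(head, parents):
    """Return the set of commits reachable from head (inclusive)."""
    seen, stack = set(), [head]
    while stack:
        commit = stack.pop()
        if commit not in seen:
            seen.add(commit)
            stack.extend(parents.get(commit, []))
    return seen

def push_allowed(trunk_head, branch_head, parents):
    """A push fast-forwards trunk only if trunk's head is already an
    ancestor of the developer's branch head."""
    return trunk_head in ancestors(branch_head, parents)

# History: A <- B is trunk; one developer merged trunk and then committed C,
# another branched from A and committed D without merging.
parents = {"B": ["A"], "C": ["B"], "D": ["A"]}
print(push_allowed("B", "C", parents))  # True: trivial fast-forward, push accepted
print(push_allowed("B", "D", parents))  # False: would need a real merge, push rejected
```

In the real flow the rejected developer merges trunk into their branch locally, resolves any conflicts there, and pushes again.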

Steve Smith voiced some scepticism on Twitter:

Conversation #2
At lunchtime I headed into Oslo to get an anniversary […]

By |May 2nd, 2015|Continuous Delivery, Practices|3 Comments

Making a meal of architectural alignment and the test-induced-design-damage fallacy

Starter
A few days ago Simon Brown posted a thoughtful piece called “Package by component and architecturally-aligned testing.” The first part of the post discusses the tensions between the common packaging approaches package-by-layer and package-by-feature. His conclusion, that neither is the right answer, is supported by a quote from Jason Gorman (that expresses the essence of thought over dogma):
The real skill is finding the right balance, and creating packages that make stuff easier to find but are as cohesive and loosely coupled as you can make them at the same time
Simon then introduces an approach that he calls package-by-component, where he describes a component as:
a combination of the business and data access logic related to a specific thing (e.g. domain concept, bounded context, etc)
By giving every component a public interface and package-protected implementation, any feature that needs to access data related to that component is forced to go through the public interface of the component that ‘owns’ the data. No direct access to the data access layer is allowed. This is a huge improvement over the frequent spaghetti-and-meatball approach to encapsulation of the data layer. I like this architectural approach. It makes things simpler and safer. But Simon draws another implication from it:
how we mock-out the data access code to create quick-running “unit tests”? The short answer is don’t bother, unless you really need to.
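As an aside, the component shape Simon describes might look something like this minimal sketch. The names here are hypothetical, and his examples are Java; Python has no package-protected visibility, so a leading underscore stands in for the hidden implementation.

```python
# Hypothetical 'orders' component: one public interface, with the data
# access logic kept behind it (the '_' prefix marking what Java would
# make package-protected).

class _OrderRepository:
    """Data access for orders; features must not touch this directly."""

    def __init__(self):
        self._rows = {}

    def save(self, order_id, total):
        self._rows[order_id] = total

    def load(self, order_id):
        return self._rows[order_id]


class OrdersComponent:
    """The component's public interface: the business and data access
    logic for one domain concept, behind one front door."""

    def __init__(self, repository=None):
        self._repository = repository or _OrderRepository()

    def place_order(self, order_id, total):
        if total <= 0:
            raise ValueError("order total must be positive")
        self._repository.save(order_id, total)

    def order_total(self, order_id):
        return self._repository.load(order_id)


# A feature goes through the public interface that 'owns' the data.
orders = OrdersComponent()
orders.place_order("o-1", 42)
print(orders.order_total("o-1"))  # 42
```

Note that the constructor still accepts a repository, which is exactly the seam where the disagreement about mocking out the data access code bites.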
I tweeted that I couldn’t agree with this, and Simon responded:
This is a topic that polarises people and I’m still not sure why
Main course
I’m going to invoke the rule of 3 to try and lay out why I disagree with Simon.
Fast feedback
The main benefit of automated tests is that you get feedback quickly when something has gone wrong. The longer it takes to run the tests, the longer […]

By |March 19th, 2015|Agile, Practices|2 Comments

Entanglement (or there’s nothing new under the sun)

I’ve just read The Age of Entanglement: When Quantum Physics Was Reborn by Louisa Gilder. It’s a tremendous book, looking at the interplay between great physicists over the whole of the 20th century. If you want to learn about quantum physics itself, this is probably not the book for you, but if a mix of science and history is your thing, then I can’t recommend this book enough. But that’s not why I’m writing this post.

As I read the book, there were two passages that jumped out at me because they were so relevant to experiences I have regularly. One was about testing and the other was about collaboration. Maybe I shouldn’t have been surprised, but there you have it – I was.

Experimental physicists understand that testing saves time. They sound a lot like developers who:
“want to slap it all together, turn it on, and see what happens.”
Inevitably:
“you can almost guarantee it’s not going to work right.”
Doesn’t this sound familiar? Their conclusion might sound familiar too:
“People always think you don’t have the time to test everything. The truth is you don’t not have the time. It’s actually a time-saving way of doing it.”

And then I found the description of a conference that sounded like a pre-cursor to the modern, open space ‘unconferences’ that have been springing up. An explicit acknowledgement that:
“the best part of any conference is always the conversation over coffee or beer, the chance meeting in the hall, the argument over dinner.”
This led directly to their decision to:
“organise their conference to be nothing but these events. No prepared talks, no schedule, no proceedings.”
I’ve heard of regular, private get-togethers like this that go on in the software community, where a selected group of invitees hole up […]

By |February 19th, 2015|Musings, Practices, Systems, Unit testing|0 Comments

Rolling Rocks Downhill

It’s almost a year since I posted a glowing review of “The Phoenix Project” – a business novel, following in the footsteps of Goldratt’s “The Goal”, about continuous delivery. If you haven’t yet read it, then I’m going to recommend that you hold fire, and read “Rolling Rocks Downhill” by Clarke Ching instead.

I should point out that Clarke is a personal friend of mine and a fellow resident of Edinburgh, so I may be a little biased. But I don’t think that this is why I feel this way – it’s just that Clarke’s book feels more real. Both “Rolling Rocks…” and “Phoenix…” have a common ancestor – “The Goal”; both apply the Theory of Constraints to an IT environment; and both lead eventually to triumph for the team and the lead protagonist.

There’s something decidedly European about Clarke’s book, which makes me feel more comfortable. The reactions of the characters were less cheesy and I really felt like people I know would behave in the manner described. (Maybe that’s because these characters really are based upon people I know ;)) There really are cultural differences between Europe and North America, and this book throws them into sharp relief. The inclusion of the familiar problems caused by repeated acquisitions and a distributed organisation only added to the feeling that this story actually described the real world, rather than an imaginary universe powered by wishful thinking. The only time that my patience wore thin was around the interventions of the “flowmaster”, Rob Lally, who is ironically a real person.

A truly useful addition in Clarke’s novel is the description of the Evaporating Cloud diagram, which (I now know) is one of the 6 thinking processes from the theory of constraints. For […]

By |February 12th, 2015|Agile, Continuous Delivery, Practices|1 Comment

Always Be Coding

Last night I finally got around to watching Erik Meijer’s keynote from last year’s Reaktor conference. It was called “One Hacker Way” and, while it contains much that is apocryphal – or at least wildly inaccurate – it scores over the older, more pedestrian type of keynote in two important ways: first it is highly contentious, and second, Erik sports a bright, tie-dyed T-shirt.

Some of the highlights of this personal rant were:

“Agile is a cancer”
“TDD is for pussies”
Stand-ups eat into valuable coding time
Coders should be treated (and paid) like professional athletes
Hierarchical structures work for the army and the church, so they must be good

There were times that I found myself nodding in agreement, though – when he compared Scrum certification to a pyramid selling scheme, for instance – but mostly it was just opinion, conspicuously lacking the support of empirical data (which was one of his criticisms of agile).

I could go on, but if you’re still interested then I recommend you just go watch the video.

My final thought is that the whole experience was vaguely reminiscent of a scene from the excellent movie “Glengarry Glen Ross”, where Alec Baldwin’s character harangues the dejected real-estate salesmen:
F*** YOU, that’s my name!! You know why, mister? ‘Cause you drove a Hyundai to get here tonight, I drove an $80,000 BMW. That’s my name!! And your name is “you’re wanting”. And you can’t play in a hacker’s game. You can’t write code. Then go home and tell your wife your troubles. Because only one thing counts in this life! Checking in code! You hear me, you f***in’ faggots? A-B-C. A-Always, B-Be, C-Coding. Always be coding! Always be coding!
And, if you’re over 32 (when Erik […]

By |January 16th, 2015|Agile, doa, Musings, Practices, TDD, Unit testing|0 Comments

Reduce, Reuse, Recycle

From http://kids.niehs.nih.gov/explore/reduce/:
Three great ways YOU can eliminate waste and protect your environment!

Waste, and how we choose to handle it, affects our world’s environment—that’s YOUR environment. The environment is everything around you including the air, water, land, plants, and man-made things. And since by now you probably know that you need a healthy environment for your own health and happiness, you can understand why effective waste management is so important to YOU and everyone else. The waste we create has to be carefully controlled to be sure that it does not harm your environment and your health.
What exactly is “waste?”
Here’s a list (not exhaustive, by any means):

Code that doesn’t get used
Log entries that give no information
Comments that echo the code
Tests that you don’t understand
Continuous integration builds that aren’t consistently green
Estimates that you aren’t confident in

How can you help?
You can help by learning about and practicing the three R’s of waste management: Reduce, reuse, and recycle!

Reduce – do less. Don’t do something because that’s what you’ve always done. Do it because it adds value; because it helps you or a colleague or a customer; because it’s worth it.
Reuse – repetition kills. Maybe not today or tomorrow, but eventually. Death by a thousand cuts. Eliminate pointless repetition, and go DRY (Don’t Repeat Yourself) for January. You’ll enjoy it so much, you’ll never go back.
Recycle – is there nothing to salvage? Really? When you find yourself building that same widget from scratch again, ask yourself “why didn’t I recycle the last version?” Was it too many implicit dependencies? Or was the granularity of your components not fine enough? Try building cohesive, decoupled components. Practice doing it – it gets easier.

By |January 12th, 2015|Agile, Cyber-Dojo, Musings, Practices|0 Comments

Do you practice BDSA?

I get to visit a lot of teams. I talk to my peers about their experiences too. For every agile success story there seems to be a load of less happy outcomes.

There are three common ways that I’ve seen agile ‘adoption’ go wrong, and they are:

Bondage – “You can never change the business. That’s just the way things work here.”
Domination – “That estimate is too big – make it smaller. These tasks are assigned to you.”
Sadism – “Work longer hours. Honour your commitments. Of course you will work on multiple projects”

If these sound familiar, then maybe your organisation uses BDSA – Bondage, domination and sado-agile.

By |December 18th, 2014|Agile, Musings, Practices|0 Comments

Half a glass

Is this glass half empty or half full?

There’s normally more than one way to interpret a situation, but we often forget that the situation itself may be under our control.

I often find my clients have backed themselves into a corner by accepting an overly restrictive understanding of what they’re trying to achieve. They will tell me, for example, that this story is too big BUT that it can’t be split, because then it wouldn’t deliver value. Or, they might be saying that there’s no point writing unit tests BECAUSE the architecture doesn’t support it. Or even, it’s not worth fixing this flickering test, because it ALWAYS works when they run it locally.

I encourage them to rethink the situation. A slice through that story may not deliver the final solution the customer needs, but it will act as a small increment towards the solution. Minor refactorings are usually enough to make a section of the code independently testable. That flickering test is undermining any confidence there might be in the test suite – fix it or delete it.

Don’t try to put a spin on a half-full glass – rethink the glass.

Instead of asking if the glass is half-full or half-empty, use half a glass.

(It turns out that wittier minds than mine have already applied themselves to this proverb – here’s a bumper collection.)

By |December 15th, 2014|Agile, Musings, Practices|0 Comments