Release Strategy: Size vs Frequency

Something I’ve run up against several times, at several different organisations, is making sure that the release process, policy and schedule are fit for purpose. That means not only complying with best practice (such as ITIL), controls (such as COBIT) and any other requirements within the business, but also ensuring that your change and release structure can cope with the volume of change.

Let’s assume your organisation has two broad categories of changes which are bundled into releases – new business functionality in the form of BCRs (Business Change Requests) and problem fixes.

We’ll also assume that you have a monthly release model which includes:

1. Scope and Design finalisation (1 week)
2. Build & Test finalisation (2 weeks)
3. Running an automated regression pack (1 week plus 2 days for final fixes to go as deltas on top of the main release)
4. Cutover and implementation activity (3 days)
5. Early Life Support (2 weeks) (ELS is an ITIL term for what used to be called release warranty – in essence, a period of enhanced support following a release, where incidents/problems follow an accelerated support model compared with normal Business As Usual support).

That gives us a 7-week model for a release (and if you have standard minor and major releases, a larger major release could run to, say, 10 weeks or more). A quick sanity check of the arithmetic is sketched below.
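As a minimal back-of-the-envelope check – assuming a 5-day working week and taking the stage durations straight from the list above – the sum works out like this:

```python
# Back-of-the-envelope check of the release cycle length, assuming a
# 5-day working week and the stage durations quoted above.
STAGE_DAYS = {
    "Scope & Design finalisation": 5,        # 1 week
    "Build & Test finalisation": 10,         # 2 weeks
    "Regression pack + delta fixes": 5 + 2,  # 1 week plus 2 days
    "Cutover & implementation": 3,           # 3 days
    "Early Life Support": 10,                # 2 weeks
}

total_days = sum(STAGE_DAYS.values())
print(f"Total: {total_days} working days, roughly {total_days / 5:.0f} weeks")
# Total: 35 working days, roughly 7 weeks
```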

Note that I’ve used the term ‘finalisation’ for the first 2 stages. Some changes will be so big that they take longer than the release cycle to build and test, so will overlap one or more release cycles.

So several releases could be in progress at the same time. This isn’t necessarily a bad thing; in fact, it’s usually a good thing. In larger organisations with clear segregation of duties, you don’t want people to down tools after they’ve done their bit and sit reading BOFH or The Oatmeal until it’s time for their bit of the next release cycle.

But be careful. Sometimes the same teams are responsible for multiple activities. An application support team could be heavily involved in test activity, implementation activity and support during Early Life Support. If they’re doing all of these at the same time for different releases, you could have a problem.
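To make that contention concrete, here’s a minimal sketch of which stages of two consecutive releases run at the same time. The stage lengths come from the model above; the 4-week cadence between release starts is my assumption for a monthly train.

```python
# Rough sketch: which stages of consecutive monthly releases overlap,
# assuming the 7-week model above and a new release starting every 4 weeks.
# Durations are in weeks (2 days treated as 0.4 of a 5-day week).

STAGES = [
    ("Scope & Design", 1.0),
    ("Build & Test", 2.0),
    ("Regression + deltas", 1.4),   # 1 week plus 2 days
    ("Cutover", 0.6),               # 3 days
    ("Early Life Support", 2.0),
]

def windows(start_week):
    """Return (stage, start, end) windows for a release starting at start_week."""
    out, t = [], start_week
    for name, duration in STAGES:
        out.append((name, t, t + duration))
        t += duration
    return out

CADENCE = 4  # weeks between release starts
for name_a, a_start, a_end in windows(0):            # release N
    for name_b, b_start, b_end in windows(CADENCE):  # release N+1
        if a_start < b_end and b_start < a_end:      # the two windows overlap
            print(f"Release N '{name_a}' overlaps release N+1 '{name_b}'")
```

The last line it prints is exactly the trap described above: the team nursing release N through Early Life Support is simultaneously deep in Build & Test for release N+1.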

And then we come to release sizing. You should have an idea of what your analysts, developers and testers can achieve in a given timescale. This gives you an approximation of how many BCRs and how many problem change requests you can process in a given window, which you can then extrapolate for the year.

Now compare this with how many incident and problem tickets get raised in that same year, and how many business change requests are expected.

Will it fit?

In a recent real-life example, we modelled a release based on current resources and timescales and calculated that, of 11,000 incidents logged a year, roughly 1,600 generated a change request to fix a problem (it was a fairly young and complex system). But the release model only allowed 600 problem change requests a year to be processed and deployed while still meeting the business need in terms of Business Change Requests.
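Putting those figures into a trivial sizing sum (the 50-per-release split across 12 monthly releases is my assumption, purely for illustration – only the annual totals come from the example):

```python
# The sizing sum from the example above, using the figures quoted in the text.
# The 50-per-release split across 12 monthly releases is assumed for illustration.
incidents_per_year      = 11_000
problem_crs_demanded    = 1_600  # incidents that generated a problem change request
releases_per_year       = 12
problem_crs_per_release = 50     # what each release could absorb alongside the BCRs

capacity  = releases_per_year * problem_crs_per_release   # 600 per year
shortfall = problem_crs_demanded - capacity               # 1,000 per year
print(f"Shortfall: {shortfall} problem change requests per year "
      f"({shortfall / problem_crs_demanded:.0%} of demand unmet)")
```

However you cut it, demand was nearly three times capacity.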

We had three basic options:

1. Have more releases (twice a month rather than once a month)
2. Put more in the release
3. Change the mix of Business Change Requests and problem fixes (i.e. reduce the number of Business Change Requests).

Some considerations with those individual options are:

1. Have more releases.
The resources were already fairly stretched. Depending on the time of year, 2 or 3 releases could be in progress at any one point. A lot more resources would be needed, and there would also be less stability between releases (longer periods of stability between changes are a nice byproduct of having release management in place). More resources could be recruited, but budgets were tight, so this was not a desirable option unless no other solution could be found. You’d also double the admin overhead – documentation, meetings – for a diminishing return.

2. Put more in the release.
Again, there were time constraints and issues with scalability. However, you could potentially have fewer but larger releases, reducing admin overhead and giving longer periods of stability. Not a bad option.

3. Change the Mix.
Political implications within the business, but a case can be made for a stability release once or twice a year where you focus only on problem fixes. Again, another reasonable option.

We ended up doing both 2 and 3 – having a stability release once a year and fewer but larger functional releases. The net result was that we exceeded the capacity we needed for problem fixes (nothing is ever perfect), with a minor reduction in business change capacity, which was caught up to almost parity once the backlog of problem fixes had been cleared.

So consider your strategy and make sure you can cope with the volume of change. Simple, and sounds blindingly obvious, but can be overlooked.

(PS – I’ve never known ‘just have more releases’ to work without a disproportionate increase in cost or reduction in quality.)