Weekend Project: VR CMDB in Minecraft (Part 1)

With VR technology poised to enter the mainstream, I wanted to understand how ITSM data and information could be represented in a 3D structure. I’ve decided to start with the CMDB (configuration management database), as it’s a set of data that is often complex and multi-dimensional. I’ve chosen Minecraft to build these structures because it has an open sandbox mode, is extendable with community-written mods, will be supported by the Oculus Rift fairly soon, and it was good enough for Octopus Deploy.
In part 1, we’ll get set up and build basic structures programmatically. Parts 2 onwards are still being thought about…

Child’s Play

Minecraft is available from http://www.minecraft.net. Use the PC/Mac client, as the Xbox/iOS/Android clients don’t currently support the mods we’ll need to install.
First things first. In the vanilla game (without any mods installed) I’ve built a representation of a mock OBASHI Business & IT (B&IT) diagram (minus labels), which can be used to describe/model systems from business process down to base infrastructure:
[Screenshot: mock OBASHI B&IT diagram built in vanilla Minecraft]
Placing those blocks only took a few minutes, but if you’re new to the game, follow these instructions:
  1. Buy and install Minecraft and set up an account to log in.
  2. Start a new single-player world. In the world options choose creative mode, and in the sub-options menu choose a ‘super flat’ world type. If you’re given the option to choose a template, I usually go with ‘redstone’ as it gives you a clean, flat, sandstone-effect ground layer.
  3. Once you spawn in the game, the WASD keys move you around (use the mouse to look), quickly double-pressing the space bar lets you fly (space to ascend and shift to descend) and E brings up your inventory. Pick any block and use the mouse buttons to place and destroy blocks. Most blocks will float in the air once placed, but you need to place them on something first.
Placing blocks by hand in any volume is too time-consuming to be useful. Vanilla Minecraft offers a mechanism to automate this using something called a command block, which lets you store and replay a single command (in this case a command called ‘setblock’) – but this only lets you place one block at a time. You can of course chain multiple command blocks together, but that’s nearly as much faff as placing the blocks by hand.
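For illustration, on the 1.7.10-era versions used here (where commands still accept numeric block IDs), a command block might store something like:

    setblock ~1 ~1 ~ 35

which places a single wool block one space up and along from the command block itself (the ~ prefix makes a coordinate relative to the block running the command).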

Modders to the rescue

Part of the great charm of Minecraft lies in the astounding number of freely available third-party modifications to the game. From automated quarries to nuclear fusion reactors to rockets, pretty much everything is catered for. You can even program computers inside the game (how meta is that?), and that’s what we’re going to use to automate the build of our CMDB – ComputerCraft (aka CC) v1.75.
ComputerCraft does many things inside the game, but the bit we’re interested in is the ability to script and execute long lists of setblock commands. You can do this either by placing a standard CC computer next to a command block, or by placing a Command Computer in the game, which combines both.
Installing ComputerCraft is less trivial. You have two options: either manually install the Forge mod loader (by unpacking and repacking the Java JAR file) and then copy over the CC mod, or use a ready-modded client from the FTB or Technic launchers, which are free to use once you’ve bought Minecraft. For the rest of part 1, I used the FTB Infinity modpack (which contains a lot more than CC). If this CMDB-Minecraft experiment proves popular, I’ll look at making a dedicated client with all mods included for a one-click install.
Back to the experiment…
If we now create a small program called cmdb and write the following commands:
[Screenshot: the cmdb program in the ComputerCraft editor]

and then save and run it, we get the following result:
[Screenshot: the blocks placed in the world by the cmdb program]
Now we can programmatically place blocks. Walking through the script above: the first line reads the coordinates of the block the computer is placed on, so everything else can be positioned relative to it; commands.async.setblock is the syntax that invokes the command to place a block (the async part allows all the blocks to be placed simultaneously – though Minecraft will struggle with more than about 1,000 blocks placed at a time); the coordinates set the position; and the very last number – 35 here – is the ID of the block to be placed: wool, in this case, with a data value selecting the grey colour.
In Minecraft, x and z are your north/south/east/west coordinates, and y is vertical (height). In the example above, I’ve offset the z axis by 15 blocks so the script doesn’t place the blocks right on top of you. Notice that we haven’t drawn any links between blocks; these are inferred through spatial relationships (being above or next to another block implies a link).
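Since the program itself only survives as a screenshot, here’s a minimal sketch of what a cmdb-style program might look like – the 5×3 wall shape, loop bounds and offsets are illustrative assumptions rather than a transcription, but commands.getBlockPosition and commands.async.setblock are the Command Computer API calls in play:

    -- read the coordinates of the Command Computer itself
    local x, y, z = commands.getBlockPosition()
    -- place a 5-wide, 3-high wall of wool, offset 15 blocks on the z axis
    for i = 0, 4 do
      for j = 0, 2 do
        commands.async.setblock(x + i, y + j, z + 15, 35) -- 35 = wool block ID
      end
    end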
Building larger structures is now just a case of pasting in bigger and bigger scripts, and since we’re mostly incrementing one number at a time and using very similar commands, I’ve used Excel’s fill and CONCATENATE functions to build bigger data sets, as sketched below.
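For illustration (the column layout here is an assumption, not my actual sheet), if columns A to C hold the x, y and z values for each block, a formula along these lines generates one line of script per row, ready to fill down and copy out:

    =CONCATENATE("commands.async.setblock(",A2,",",B2,",",C2,",35)")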
[Screenshot: Excel sheet generating setblock commands with fill and CONCATENATE]
This leads us to one limitation of ComputerCraft: you can’t copy and paste into it. CC does, however, let you connect to pastebin.com – a free sharing site. Simply go to pastebin, create a new paste containing your commands, note the ID (in the example below the ID is 8iYYj3x5) and use it inside ComputerCraft with the following syntax:
[Screenshot: the pastebin get command in ComputerCraft]
The pastebin and get parts are pretty self-explanatory. The id is the ID you got from the paste’s URL, and the program name is the name the program will be saved under on the in-game computer. In this case I used cmdb30 (as I’m iterating quite fast…).
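Reconstructed from the screenshot above, the full command line is:

    pastebin get 8iYYj3x5 cmdb30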
So now we have a repeatable process:
  1. Build the block list in Excel
  2. Copy the list to pastebin
  3. Import to CC and execute
With this and some basic Excel formulas we can build 3D models fairly easily.
Using data validation and VLOOKUPs in Excel, we can also start to adjust axis offsets to force certain CI classes onto specific layers in the diagram – and if, as an example, we map time to the z axis, we can start building diagrams comprised of multiple snapshots over time.
Using this script (http://pastebin.com/8iYYj3x5) I’ve drawn 3 slices of the same 2-layer B&IT-style diagram, which could represent the same infrastructure and server configuration items over 3 different time periods.
[Screenshot: 3 slices of the same 2-layer B&IT diagram]
Which brings us to the interesting question – what do we use the different dimensions for?
A standard OBASHI B&IT diagram, for example, is 2D. In the diagram above I’ve envisaged the third dimension being used for time: as you move forward past the blocks, you go back in time. But we actually have more than 3 dimensions to play with. There’s rotation, and we can also use the extremes of the existing dimensions to add another: for example, if an existing B&IT diagram is 1000 blocks wide, we can move ourselves 1001 blocks away in any of the cardinal directions (Minecraft has a 256-block height restriction) with a single ‘teleport’ command and start drawing again.
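For illustration (the @p selector targets the nearest player – you – and the ~ prefix makes a coordinate relative to your current position), typing this into chat:

    /tp @p ~1001 ~ ~

moves you 1001 blocks along the x axis, ready to start drawing a fresh diagram.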
In part 1 we haven’t really done much more than play with placing blocks programmatically, but we have identified a few areas to focus on for future investigations:
  1. What do the physical dimensions relate to?
  2. Are spatial relationships enough or do we need to be able to draw lines?
And a couple of practical/technical issues have occurred to me in the course of this that I want to try to address in the future:
  1. Plotting out block lists in Excel is admittedly easier than doing it by hand, but will generate a lot of data. Can we be cleverer about this?
  2. Blocks are pretty dumb, in that they contain no information other than their colour. How do we get other information (e.g. server name, IP address) shown in this model?

It would be nice to see this taken further, to the point where it’s showing meaningful relationships between data and possibly providing some stimulus for tool vendors to start thinking about how they’re going to work in the VR space – if at all.

Got something to add or suggest? Post away in the comments below! Part 2 is on its way. 

#toolhacking for fun and ITSM – Release Planning with Trello

I love Trello. For those unfamiliar, Trello is a SaaS kanban board / taskboard. It’s free (mostly) and is virtually idiot-proof to set up and start using.

You don’t have to adopt the kanban method to use a kanban board; Trello works just as well as a visual task board, with tasks moving from left to right across columns as they progress and the columns representing the states of a task (e.g. ready, doing, done).

Here’s a quick example:

[Screenshot: introduction board, before]

[Screenshot: introduction board, after]

Task Agility

Agile teams should be very familiar with kanban boards (as well as the kanban method) and will probably already be doing some kind of release planning, so this article isn’t really for them.

The agility I have in mind is the one that lets you quickly reassign an object from one group to another, for example a release scoping candidate from one release to the next.

 

Use case 1 – Lots of components, lots of regular windows, lots of movement, fast cycle time

Imagine being a release manager on a busy project delivering several systems, perhaps in a Service-Oriented Architecture with no (or infrequent) functional dependency between the systems, and with individual development/test teams each building at different paces.

Imagine you’ve agreed one release window a day with your testers and other stakeholders, and you also have to factor in maintenance slots, data refreshes and the delivery of new hardware to keep up (because performance testing is optional, said no release manager, ever…).

Your Trello task board may end up looking something like this:

[Screenshot: example release scoping board]

In the above example, I’ve defined a column on the left to store potential component release candidates not yet allocated to a release, and the next 3 days’ worth of releases have a column each. I’ve used Trello’s built-in labelling to show red/amber/green flags for each candidate – something I use to tell me how likely the candidate is to make the cutoff for that day’s release, or how downright invasive it is.

I usually keep it this simple if I’m in this kind of situation. Simple means fast, with fewer chances of mistakes. The features and bugs in each component are held somewhere else; all I need to know is which bit of code is going on which day, and what else is happening.

 

Use case 2 – Tracking the status of several slower releases

Now step back from rapid releases and think about the situation where you’re still running several releases at the same time, but each one takes time to build and package, with documentation and training to complete – and maybe some 2-3 day automated test packs to run as well.

Using a kanban board here lets you visually track the status of each release, like this:

[Screenshot: simple release status tracker]

In this example, I have some monthly releases in progress as well as a quarterly patching job. I’ve used RAG status labels to show the risk of each release, and you’ll also notice a date on each card (using Trello’s ‘due date’ feature), letting me see the deployment date of each release at a glance.

Additionally, Trello supports templated checklists. So if you want to build a release checklist – for anything from simple releases up to full service releases with operational acceptance and service introduction activities – you can do that too:

[Screenshot: example release checklist]

 

Summary

Visual taskboards like Trello are a useful alternative to spreadsheets or Gantt charts. They’re great at presenting information simply while allowing you to be flexible and agile in your planning.

I use Trello a lot (both free and premium versions) as it’s so simple and fast to use. There are alternatives with some richer features which I may cover in a future article, but if you’re struggling in scenarios similar to the ones above, it may be worth giving Trello a try.

This ITSM was held captive for years. You won’t believe what happens when it’s released!!!

OK, I lied. No ITSMs, ITILs or Agiles were harmed in the making of this blog entry, and there’s no heartwarming video of an ITSM being released from captivity – but here are some cartoons I commissioned some time ago for a website I was experimenting with for event listings, which I never really made the time to promote properly.

The design ideas were all mine, but since I have the drawing and colouring skills of a ham sandwich, I found a very talented artist in Tim Teague (Sleazyfish Studios) who turned my ideas into something better than I hoped for.

So here’s the first 12 episodes of my comic strip “Plan, Do, Cheque, Run!”

[Comic strips: episodes 1-12 of “Plan, Do, Cheque, Run!”]

The ITIL Manifesto v1.0

The ITIL Manifesto is available here:

The ITIL Manifesto 1.0

In my last post, I said we were going to have a vote on distilled principles, but we didn’t get many submissions, so I’ve included all of them verbatim from Simone, Peter and Daniel.

By rights, this would be called a 0.1 (or a ‘beta’ if we were a 2011-era startup SaaS product…), but it stands on its own and there’s some good content.

I’d love to see this evolve. Maybe some form of quarterly CAB to process changes, additions, edits and rewrites. Who knows – maybe this becomes part of another initiative. But I’ve included the original ideas as well as the principles so you can remix this to your heart’s content under the Creative Commons licence, which also allows for commercial works. Please attribute anything you make from this material back to the ITIL Manifesto.

I’ll see what the community wants to do with regard to improving and evolving this, but for now, thank you all for your input, writing, google hangout attendance and more.

Publishing an ITIL Manifesto

The ITIL Manifesto initiative has been running for a little under a year.

We have:

  • 107 ideas grouped 6 ways to align with the 6 ITIL value propositions
  • A draft vision/mission statement

We don’t have:

  • A manifesto

I’d like to get something written and published, an ITIL Manifesto 1.0 which can then be improved, changed, rewritten if necessary.

How do we get to Manifesto 1.0?

Since we have the original raw ideas, a structure (the 6 value proposition areas) and a guiding vision, we have most of the necessary raw materials to create a manifesto.

The materials have been pulled together into a slide pack (also available as a PDF) – links below – to allow you, as individuals or self-organising groups, to produce 6 principles (one per group of ideas) and submit them back to me by the end of May. The pack has been written as a sort of ‘activity workbook’ to let you print it off, carry it around with you and scribble on it for a few days whilst you think about the best way of writing a principle that covers some or all of the ideas in each idea group.

We’ll then vote, as a community, on the best principle in each area.

The winning principles (simple majority) will be published as the ITIL Manifesto v1.0.

 

How will changes be made to the manifesto?

If only there was a best practice framework which captured good practice for managing change…

I suggest:

  • A quarterly CAB which is also a decision authority
  • A system for recording RFCs, getting community votes on them and tracking them through a simple lifecycle

What’s the timeline?

Now to end of May 2015 – Community creates manifesto principles
Early June 2015 – Public voting on the submitted principles
Late June 2015 – Publish v1.0 of the Manifesto

 

Get involved?

  1. Download one of the following versions of the resource pack (PDF or PPT)
  2. Craft 6 principles using the pack
  3. Send them back to me (instructions in the pack)

 

The ITIL Manifesto – what it is and why you should get involved

A few weeks ago, a discussion on Twitter led to the suggestion of an ITIL Manifesto: an attempt to codify and simplify the core tenets of ITIL from the triple perspectives of experience, theory and aspiration.

Some of the discussions since have talked about a manifesto as a way of remembering, communicating and promoting those principles and values to colleagues, customers and other key stakeholders. The principles should be broad enough to cover the essence of 2,000 pages of ITIL, but grounded and memorable enough that the swamped professional can take a step back, think, recall the relevant principle and answer from a position of best-practice pragmatism rather than theoretical fanaticism.

Some people have focused on using the manifesto as a way of steering a possible next version of ITIL (if there’s even going to be one); others have argued that it’s a way of making V3/2011 more accessible. Some have argued that it should be an ITSM manifesto, others that it should be ITIL (a vote on the now-closed Tricider stream was 5-2 in favour of ITIL), although with only 7 votes out of 928 unique visits it obviously wasn’t a hugely contentious issue.

AXELOS have been brilliant in supporting us in this. They’ve actively helped drive the twitter discussions, they’ve posted updates to their website and the Head of ITIL practice – Kaimar Karu – has been involved in the community discussions. They’ve not tried to steer the content, but seem happy to let it evolve and grow where it needs to.

The community have likewise been brilliant. Over a hundred suggestions were gathered in the first phase from a diverse range of people. The well-known ITSM leaders and authorities have contributed with energy, but have also, in more than one case, suggested they themselves step back and let fresh voices be heard.

This is where you come in. Yes you, reading this now.

The fresh voices mentioned are the people who haven’t necessarily been active in the ITIL or ITSM community to date, or maybe have been but want to step it up. It could be non-IT professionals who have opinions about IT Service Management. It could be CIOs, DBAs or developers who think we’ve got it all wrong. It could be experienced consultants who have been working so hard at solving problems that they’ve not realised there was an ITIL manifesto initiative and have some 24-carat solid gold insights which ITIL didn’t even know it needed – until now.

The opportunities to get involved are there for everyone. Now that we’ve gathered a mass of raw ideas, it’s time to group them, distil them and craft them into principles or manifesto statements. I’d like to offer multiple routes to do this – both traditional “join a working group” and the less traditional “here’s the original material, go away and self-organise – you’ve got 3 weeks.”

The first meeting of the original few volunteers/contributors is tonight, where I’ll be advocating for all of the above – and so, I believe, will all the other participants. We’ll mostly be talking about how we organise and coordinate, and who’s going to do the admin-y stuff. I’m going to set up some wider discussions following on from that (Google Hangouts supports a maximum of 10 active participants), so stay tuned.

If you want to be a part of ITIL’s present, and maybe influence its future, get involved now.

Also keep an eye on #ITILManifesto on Twitter, and see the links in the wiki above for the Back2ITSM Facebook and Google+ groups.

The real problem with frequent emergency changes

Changes are decisions, balancing risk and reward. In the language of Decision Theory, they are classed as normative or prescriptive decisions (i.e. identifying the best decision to take) which rely on access to good information and are best made rationally (rather than irrationally, emotionally or because there’s an expensive marketing campaign about to start that nobody told IT about). The processes and tools used to analyse decisions and their outcomes are usually referred to as Decision Support Systems, and can be technological, procedural, or a combination.

For IT changes, the Decision Support System is the change policy, process, tool and the people who manage, review and approve them. Using these, it’s possible to take one big decision (“should we do this change?”) and carve it into smaller and more manageable decisions, e.g.:

  • Is the Request For Change correctly filled out with meaningful information?
  • Has the technical peer review been completed?
  • Is there a compelling benefit case?
  • Has the change been tested?
  • Is there a fully resourced and tested implementation and remediation plan?
  • Will it clash with another change?

etc etc.

Most of these smaller decisions are human and procedural, though technological quality gates can be included too, such as simulated code builds, static code analysis outputs, configuration data etc.

By splitting bigger decisions into smaller ones, it’s possible to spread the burden of decision-making across several people, although care must be taken to avoid making all of the sub-decisions in isolation (or becoming overly reliant on theoretical models, as per the Ludic Fallacy). This is why the ultimate decision (“should we do this change?”) is best taken in the context of the real-world situation in your organisation – you and I know this as the CAB.

Most change managers and change approvers know this concept of splitting up big decisions intuitively, which is why we reject poor quality change records, ask for peer reviews, demand to see a test completion report and make people wait for CAB.

The concept of a change process then is a good one. It leads the requestors, assessors and approvers through a logical sequence of smaller decision-making until there is enough information to make that ultimate decision in a relatively straightforward manner with good information to base it on. This process can take days, especially as many of the smaller-decision makers have day jobs, which is why many organisations have weekly scheduled CABs and cutoff dates.

But the benefit of smaller decisions is missing from emergency changes, because they simply don’t have time – they’re emergencies. The smaller decisions end up getting rushed (“well, it’s got half an implementation plan, and we tested a bit of it – is that good enough?”) and their burden gets pushed onto the emergency change approver, which, despite the good intentions in your emergency change policy, can end up being a single senior manager, or sometimes the change manager alone.

If this happens once a month, it’s probably not a big deal. The person who ends up being asked to decide might take a little time to ask a few people their opinions, form an emergency CAB, push back on some shoddy testing, rework the implementation plan, speak to the business and so on. But if it happens several times a day, that person is going to get fatigued, stressed, and ultimately end up making bad decisions.

And this is the problem with having too many emergency changes. Bad or rushed decisions either mean blocking something the business really needs through being too risk averse, or present an unnecessarily high risk to your production services by being too relaxed.

New ITSM Events listing site

Having read a few posts on various social media sites about people looking for combined lists of ITSM events and not finding much apart from the vendors’ own sites, I decided to make one.

Bear in mind that my coding skills are rudimentary, in fact watching me try to write code is probably like watching a dog learn to play chess, so all the hard work was actually done by my wife…

The end result is http://www.itsmevents.com and there’s a link at the bottom if you want to submit your own event. A few additional features are planned over the next few weeks, but it’s intended to be a simple list of ITSM events.

 

[Screenshot: the ITSM Events listing site]

Becoming an ITIL Expert

The ITIL® Expert qualification, the penultimate award in the ITIL stack, is not easy, but it is achievable – and if you put in the study time and pay attention, it is fairly straightforward.

Contrary to popular belief, there aren’t any nasty trick questions. There are tough questions in the intermediate exams and higher, but tough in a way that gives you an unclear choice between picking the answer that is ‘most ITIL’ or ‘best answers the question/case study’.

Your first choice, after deciding to pursue the Expert qualification (either from start to finish or over a number of years as I did) is which route to take.


[Diagram: routes to the ITIL Expert qualification]

There are 3 main routes to your Expert qualification:

1. Conversion from V2 Manager

As a single course/qualification, this was retired in 2011. However, for those who missed the boat and hold the V2 Manager’s certificate, there is an expedited route to ITIL Expert:

  1. ITIL Foundation / Foundation Bridge, then
  2. ITIL Intermediate Lifecycle: Service Strategy OR Continual Service Improvement, then
  3. ITIL Managing Across the Lifecycle (MALC)

2. The direct Lifecycle or Capability paths

This involves taking the following exams:

  1. ITIL Foundation
  2. All five ITIL Intermediate Lifecycle certificates or all four Capability certificates
  3. Managing Across the Lifecycle

You’ll notice that you get a choice in step 2. The Lifecycle qualifications are aimed more at the consultant or manager, whereas the Capability qualifications are aimed more at the practitioner.

There are other differences as well. If you choose to study for these in the classroom, then the Lifecycle courses are all 3 days each so you’ll spend 15 days out of the office. The Capability courses are 5 days each so you’ll spend 20 days. This can be an added cost for contractors & consultants (on daily rates) or those who have limited training days available.

The cost of the courses is another consideration. A quick check of the list prices on a popular UK training provider’s website shows Lifecycle courses at £1245 each and Capability courses at £2300 each. That makes the five Lifecycle courses £6225 in total and the four Capability courses £9200 – and you still need to pay for the Foundation and MALC whichever route you choose. Even if you go down the Lifecycle route, you’re not getting much change from £10k – though this gets a lot cheaper when done via online learning (more on this later).

Note that if you’re paying for this yourself as an individual rather than through a company, tell the salesperson when you call for a quote – they usually do discounts of 20-30% for individuals, and multi-course discounts are also available. Discounts tend to be greater in Q2 and Q3 of each year, when training budgets have been spent (or are being held on to). Also make sure you ask about free re-sits in case you don’t do well on your first attempt.

3. Indirect path (mixture of Capability, Lifecycle & others)

The final route to Expert is a hybrid approach. You can combine Lifecycle and Capability qualifications, as well as some of the shorter specialist courses, to get the credits you need – but be careful: where the various syllabuses overlap, the examiner discounts the point scores for the individual certificates, as explained in the following links:

http://www.itil-officialsite.com/Qualifications/RoutesintotheCurrentSchemeForEarlierITILCertificateHolders.aspx

http://www.itil-officialsite.com/Qualifications/ITILQualificationLevels/ITILIntermediateLevel.aspx

http://www.itil-officialsite.com/Qualifications/ITILCreditSystem.aspx

http://www.itil-officialsite.com/Qualifications/ComplementaryQualifications.aspx

The routes to ITIL Expert are summarised in the diagram near the top of this page, and there’s also a very good diagram by Peter Lijnse at ServiceManagementArt available here which also shows how the topics are carved up differently between the 5 Lifecycle and 4 Capability courses.

What about online?

It’s possible to go from novice to Expert via online-only study. But should you? There are major cost benefits (up to 65% off the classroom price), and online delivery & examination methods are maturing rapidly – though make sure you stick to reputable, accredited training organisations. Some recommendations are included later.

Foundation

Studying ITIL Foundation online is both cost- and time-effective and can be a very viable option for many people. It’s also the easiest of the ITIL qualifications, though what it lacks in depth it more than makes up for in breadth: you’ll cover a lot of concepts in a short space of time. The main value I found in the classroom was having the instructor give a joined-up overview of the whole end-to-end lifecycle.

You can order ITIL online training from many providers, or buy single books / complete certification kits from online retailers.

Booking and taking the Foundation exam is also pretty straightforward – one of the most popular exam providers is Prometric/EXIN.

Google will show many online training providers, but if you want a recommendation then check out ITIL Training Zone for Online ITIL Foundation Courses.

Intermediates & MALC

Taking the intermediate exams online is a little more involved. Whereas for the Foundation you could just read a book and rock up to a test centre, for the intermediates you need to have studied with an accredited training organisation (ATO), and for a minimum length of time.

There are some ATOs who are accredited to deliver online training however, such as ITIL Training Zone who deliver the full range of online ITIL courses.

Taking exams online from your own home or office is also possible. You can usually sit a webcam-proctored exam, and some exam boards even offer an independent third party to monitor you whilst you sit an online exam (with dedicated software which locks the mouse/keyboard to the exam app and fails you if you exit the application). It is also possible to go and sit (once you’ve paid) one of the classroom exams run by other training companies, as long as you have your attendance certificate from the online ATO (though this policy may vary from company to company – check first).

Having taken Service Operations via the online route, I’d recommend it for anyone who finds self-paced learning preferable, or who doesn’t want to take the time off work, or who already understands the material well and just needs it formalising before taking the exam. It’s also a lot cheaper. Most of the core learning courses are around $500, rising to $800 or so for the full package including exam prep questions and the exam voucher.

Seriously consider getting the exam prep kits. They’ll clue you in to the question formats and get you used to thinking the right way. Where the Foundation was fairly easy multiple choice with one right and three wrong answers, the Intermediate exam questions have 4 ‘right’ answers – but some are more right than others.

 

10 ITIL Intermediate and MALC exam tips

1. Study. Not just the specific course, but scan back over the foundation material as well.

2. Take practice exams and read the answer rationales. You may not agree with them – I didn’t for half of them – but they give useful guidance on what the examiners are looking for.

3. Ignore Bloom’s Taxonomy easy/medium/hard ratings. The difficulty varies from person to person. It may match your ability perfectly, it might not.

4. Don’t change your answer unless you made a blindingly obvious mistake, but even then be careful. Unless you have a good 10 minutes left on the clock to really get back into a question in detail, leave it.

5. The best answer is usually the one that gets 3 things right: it’s the most ‘ITIL’, it best matches the case study, it best answers the specific question.

6. Because ITIL focuses on value, ‘value’ and ‘business value’ are words to look out for in correct answers – but don’t rely on this, especially if you can’t make the rest of the answer work.

7. If going down the Lifecycle route, take Service Strategy & CSI last before your MALC. Strategy & CSI have the most overlap with the MALC syllabus and will be fresh in your mind.

8. For Lifecycle exams, avoid the answer which rushes in and starts doing things. The best answer is usually the one that takes a step back, considers the whole situation and thinks more strategically (an exception is when the word ‘tactical’ is used).

9. Avoid answers which involve some kind of ‘wacky scheme’ (like starting your own IT Hardware business when the question was asking about problem management).

10. Don’t mistake ITIL world for your world or experiences. Just because you’d do it one way doesn’t mean ITIL will.

 

Good luck – let me know in the comments how you got on!

DevOps and ITIL: Continuous Delivery doesn’t stop at software

DevOps. The next saviour of the world. A combination of agile delivery and super-effective provisioning scripts with a sprinkling of software-defined (virtual) infrastructure.

Whilst ITIL v3 and 2011 seem (on the surface at least) to support a waterfall approach, it’s imperative to step back and think for a second.

The concept of identifying a need for change to your service portfolio hasn’t altered. It still relies on identifying demand and building a decent relationship with your customers.

You still need to design it, and you’ll still need to operate it and continually improve it.

The biggest difference here is your transition strategy, transition design and the execution of transition. If you have waterfall-style release policies and service validation practices, these won’t be capable of dealing with DevOps-style continuous delivery out of the box. Sure, you can slice and dice your delivery packages into smaller and smaller pieces, but this is still going to be at odds with the kind of flexibility DevOps promises.

So let’s step back to Service Strategy and Service Design. In a traditional waterfall approach, an example of a high-level transition plan is:

 

Delivery 1 (June)
  • Functional requirements a, b & c.
  • Pair of DB servers, single application server, single web server, no load balancer.
  • Non-functional requirements x, y, z meeting 75% of targets.
  • Documents 1, 2 & 3.
  • Ops briefing at t-3 weeks.
  • ELS to +5 days.
Delivery 2 (Nov)
  • Additional functional requirements d, e & f.
  • Additional web server with load balancer.
  • Non-functional requirements x, y, z meeting 90% of targets.
  • Documents 4, 5 & 6.
  • Full training plan from t-5 weeks.
  • ELS to +15 days.

 

These are packages – discrete units, similar to this site’s logo (which may need to change now!). You have built entire releases – some software components, some hardware, some documents, some knowledge transfer – into a bundle. This works really well for big deliveries, and even slicing these up into smaller 2-3 week deliveries is possible, albeit with some overhead.

But what about daily? What about developers with testers sat next to them, checking quality-tested code into an automated build server every day, having it integrated, auto test packs run, and the result pushed to production? How are you going to slice release packages across multiple developers cutting code for variable, prioritised lists of business requirements?

Packages now become unwieldy. You need a different transition strategy, and a different transition design, and you need to execute them differently from the packaging or bundling concept.

How? Rather than a conveyor belt of packaged releases, this becomes concurrent streams of smaller deliverables, for example:

 

  • Stream 1 – Code objects checked in, tested & deployed. Frequency: daily. Roles/processes: R&DM, SACM.
  • Stream 2 – Knowledge updates created & tested for the new functional requirements. Frequency: every other day. Roles/processes: SACM, SV&T, Knowledge Management.
  • Stream 3 – Formal operational acceptance tests. Frequency: 1-2 times per week. Roles/processes: SV&T, SLM, BRM, App/Tech Function Managers.
  • Stream 4 – Hardware deliveries. Frequency: as required. Roles/processes: R&DM, Tech Mgmt.
  • Stream 5 – ELS & Continual Service Improvement. Frequency: daily. Roles/processes: CSI, SLM, BRM, Service Owner.

 

You’ll notice that I’ve referenced existing ITIL processes, functions and roles. We’re still using best practices (or good practices, if you prefer), tailored to your organisation – and they may need to be tailored further for this approach. But the concept of creating a new knowledge CI, linking it to a functional requirement document and to service and hardware CIs, accounting for its status and verifying it is not new.

You need good tooling, and you need reasonably mature configuration management and ditto for knowledge management – but if these aren’t already happening in your organisation, there’s no reason you can’t use your DevOps approach to implement good config and knowledge management at the same time.

Change management is still change management. Very little needs to (pardon the pun) change. It still acts as an approval/authorisation broker at various stages of the lifecycle, and it can still devolve production release approval to the automated build and integration test process. But this is a policy addition driven by your transition (and service) strategy, not a whole new process.

You’ll also notice I’ve lumped ELS (Early Life Support) and CSI together. ELS never ends under this approach – but think back to the joining of Dev and Ops and you’ll realise that ELS loses its significance. I’ve kept it here because there are some really good ELS practices you’ll want to keep: good defect/incident management at a heightened level with a razor-sharp focus on issues caused by change, and regular sunrise/sunset meetings with sponsors and business owners. I’ve merged it with CSI because this complements the very nature of agile and, crucially, gives you a perfect opportunity for a closed-loop feedback system into operational, tactical or strategic improvements on a daily basis.

So you don’t need a dedicated DevOps process. You need a DevOps-focused transition strategy, design and execution. You need sensible tweaks to existing processes and policies. You need good tools for build, test, config and knowledge. You need good practices and good leadership.

You can do all of this with existing ITIL, just as you can with any other framework.