YAML and the importance of Schema Validation

I’ve been working with YAML over the past few weeks as part of a new configuration framework, and have been quite impressed with the simplicity of the language and how easy it is to learn. I’ve in no way stretched its capabilities, but I did find that it has one serious weakness: there is no official schema language to validate it against. I’m aware of tools like Kwalify that supposedly do the job – but where is the official schema language?

Having used XML for many years, I’ve come to depend on XML Schema to validate my documents, and the idea of using a language without any form of validation is a little worrying. Despite this flaw, we’ve still decided to use it – but it does leave us in a vulnerable position. We now have to create a set of tools to validate our configuration files, which is an unfortunate burden to the team – and something that would’ve been easily achievable in a matter of hours had we chosen to use XML. XML Schema is very well defined and very well known, so there’s no need for me to list the benefits it brings. But YAML has been around since 2004, so it’s quite disappointing that it still lacks such a fundamental tool.

So why is validation so important? The answer to that question depends on what you’re doing with the documents, and what the consequences would be if they were invalid. In our case, these configuration files will drive virtually every aspect of our site. If the configuration is invalid, it will directly affect our users… maybe just a few users, or maybe all of them. Neither of those scenarios is acceptable.

This is an area where XML really comes into its own. XML Schema and XSLT are powerful additions to the XML toolkit, and they make a very compelling case for using it. Unfortunately, the fact remains that XML is sometimes more verbose than we would like it to be, which is where YAML is the obvious winner.

In the meantime, we’re left with the job of developing our own tools to validate our configuration files. It’s not a big job, but it shouldn’t be necessary. Given a decent validation language, this should be trivial. That raises another issue with building custom tools for validation – we need to test that the validator is validating correctly. Again – not a big task, but it shouldn’t be necessary.
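
To give a sense of the sort of tool I mean, here’s a minimal sketch of a hand-rolled validator – written in Python with PyYAML, and using made-up keys and rules purely for illustration (our actual configuration and rules aren’t shown here):

```python
# Minimal hand-rolled validator for a YAML config file.
# Assumes PyYAML; the keys and rules below are hypothetical examples.
import sys
import yaml

# Each rule maps a required top-level key to the Python type we expect
# after parsing.
RULES = {
    "site_name": str,        # e.g. "my-site"
    "max_connections": int,  # e.g. 200
    "features": list,        # e.g. [search, checkout]
}

def validate(path):
    """Return a list of human-readable problems found in the file."""
    with open(path) as f:
        config = yaml.safe_load(f)

    if not isinstance(config, dict):
        return ["top-level document must be a mapping"]

    problems = []
    for key, expected_type in RULES.items():
        if key not in config:
            problems.append(f"missing required key: {key}")
        elif not isinstance(config[key], expected_type):
            problems.append(
                f"{key}: expected {expected_type.__name__}, "
                f"got {type(config[key]).__name__}"
            )
    return problems

if __name__ == "__main__":
    errors = validate(sys.argv[1])
    for error in errors:
        print(f"INVALID: {error}")
    sys.exit(1 if errors else 0)
```

With a proper schema language, that entire script – and the tests needed to prove it validates correctly – would be replaced by a schema document and an off-the-shelf validator.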

Hopefully the designers of YAML will one day design a schema language!


The “sticky shotgun” retrospective

I’ve been meaning to write this since early last week, but my first two weeks at Betfair have kept me fairly busy. After a nice chilled weekend, I finally have the time to get this down!

So I started a new job at Betfair at the beginning of April and have gradually been introduced to a few different ways of doing things. On my second day, we had a retrospective for the team’s previous sprint – which obviously I didn’t participate in, so I had the benefit of seeing how they do their retrospectives before being actively involved in one. The approach was subtly different from what I’ve done before, and I thought it worthwhile to share with others that work in a Scrum team.

Martin explained how it worked (for the benefit of a few of us that hadn’t done it their way before) and used the term “sticky shotgun” – which I think describes it quite aptly. Here’s how it works…

Everyone is given a marker and a pad of PostIt notes (or “stickies”) and writes down whatever comes to mind from the previous sprint. When they’re done, they all go up and stick them on the whiteboard in the correct zone (hence the term “sticky shotgun”). They use five zones: Start, Stop, Keep, More, and Less.

After the sticky shotgun, the stickies are clustered together into groups of similar issues. As you might expect, there are bound to be a lot of similar issues raised, as everyone has worked on the same user stories within the same team for the last two weeks. Instead of going through each individually, grouping them speeds up the process. To make sure nothing has been misinterpreted, each sticky is checked with its author before being placed in a group. We then briefly discuss the issues and try to identify what needs to be done to solve them; or in the case of the “keep” issues, what was done that makes the item worth keeping.

After the grouping and discussion, everyone is allowed to select their three most important groups. This helps rank the groups in order of importance so the team knows what needs to be addressed first.

This approach seems to provide a few distinct advantages over the way I’ve done retrospectives in the past:

Firstly, it allows a greater degree of participation. Not everyone is comfortable voicing their opinions in a group – particularly when they’re talking about the areas that need improvement. Allowing each person to note their thoughts down independently ensures that their opinions are heard.

Secondly, it speeds up what can be a very time-consuming meeting. It is very likely that a number of people have similar issues. This method allows everyone to voice their opinions without needing hours of everyone’s time. It also clearly identifies those opinions that are shared by a large number of people on the team.

Thirdly, it prevents snowballing. Quite often, one issue sparks off another and another – and before you know it, you’re no longer focusing on the right things. The point of the retrospective is to raise the issues, not to solve them. This approach provides for a relatively short period of problem identification, followed by a short period of problem definition, followed by a final short period of problem prioritisation.

Overall, it was very interesting to see a different slant on retrospectives, as they seem to be the quiet brother/sister in the Scrum family that no one talks about. There are hundreds of discussions over how we write user stories, how we estimate and assign story points to our user stories, how we sell Agile to customers – but I don’t recall seeing as much discussion over retrospectives.

Ultimately, it is the retrospectives that help us to see what we’re doing right or wrong and where we can improve. So it’s definitely a topic worth talking about.


Is that REALLY the tone you want to encourage?

You can tell when I’m on a lecture week at Oxford, because I suddenly have the time to catch up on all the blogging that I intended to do over the past weeks/months!

This is something I stumbled onto when browsing the latest news on LinkedIn Today. Startmate, a startup incubator in Australia, has provided funding for a bunch of companies – in particular, a company called Chorus. According to TechCrunch:

Chorus lets companies reply to the angriest emails first, decrease risk of bad PR on social networks, and predict future trends and sentiment. The goal: more customer love.

Now, I completely see where they’re coming from… but is that REALLY the kind of customer engagement you want to encourage? Surely, if your customers realise they’re only getting fast responses when they’re angry, they’ll start to only send angry emails. It won’t be long before they start to associate their anger with your customer service.

Having recently dealt with terrible customer service from a “reputable” UK web brand, I have a fairly fresh perspective on this. Admittedly, this is just my own perspective and there’s absolutely no doubt others will act differently. I was quite polite initially, and only got angry when they simply refused to do what I had asked (which was to close my account). It took over 20 minutes for them to get the message, which is why I got angry. If I had known that they would only take me seriously (or would only consider me a priority) when I became angry, I’d simply have got there sooner!

I’m sure there’ll be a lot of disagreement over this point – and I’m not a marketing expert – but I think it’s fair to say that you shouldn’t be surprised when your customers get angry with you if that’s what it takes to get a response from you! I can only imagine how well that would work in a classroom of young children 🙂

I just hope Chorus is intelligent enough to separate polite anger from rudeness.


Code Reviews… One Month Later

It has been one month since we introduced formal code reviews as part of our development process, and I thought it fitting to do a review of the code reviews – effectively a Code Review Retrospective. Rather than creating a list of n reasons why you should be doing code reviews, I’m simply going to take you through our experiences over the last 4 weeks.

I guess the best place to start is to look at the goals I had for our code reviews. I’ve summarised them as follows:

  1. Improve quality [1]
  2. Share ideas
  3. Share knowledge of the code
  4. Share ownership of the code
  5. Mentor junior programmers

You’ll notice that most of these points can be summarised in one word… TEAMWORK!

Teamwork

I have already seen improvement in a number of these points in only a month. We have ensured that we always submit code for review before allowing it to be handed over for testing, and this has caught a variety of issues in the early stages of development. The issues we’ve identified so far range from subtler matters such as usability, terminology and coding standards to bigger concerns such as EIS integration and design.

I also found that the code review process has stimulated more internal dialogue within the development team about how we should go about solving some of the user stories on our task list. As a result, we have achieved the goals of sharing knowledge and ownership of the code and sharing ideas. In turn, this has improved the quality of what we deliver to testing and ensured consistency in our approach as individuals within the same team.

When comparing code that has been reviewed against code that has not been reviewed, it is quite clear that the code reviews help us to deliver code in a consistent manner across the team. That’s quite a big strength in my opinion, as it means that we’re all playing by the same rules, abiding by the same standards, and finding consensus in our approach. Maintainability has improved because the code always looks familiar – so less time is spent figuring out what the last guy did before trying to fix a bug or develop a new feature.

Personal Benefits

I’ve found that I can keep a much better eye on our delivery by simply monitoring the code reviews. It gives me an incremental view of the changes to the codebase, and allows me to check that we’re meeting the standards we’ve set for quality, simplicity, maintainability, and so on. I think this is essential for any Tech Lead, and it provides that oversight with minimal interference. The productivity of the team should not suffer unnecessarily in order to meet standards – there is often a less disruptive way of achieving this.

What to try next

When we started the code reviews, we were unsure whether to do pre-commit or post-commit reviews. We decided to start with post-commit reviews, as this would be the least disruptive to a familiar development process and would allow us to focus on one new area at a time. Now that we’ve been doing code reviews for a while and have stabilised the process, I’m quite keen to try pre-commit reviews for a sprint or two. I’m sure there will be obstacles to overcome, but the best way to evaluate the two methods against each other is to try them both.

[1] There are many different views on what “software quality” means. I personally like the points outlined in Wikipedia: http://en.wikipedia.org/wiki/Software_quality#Software_quality_factors


Thoughts on Joining an Open Source Project

Since starting my career as a software developer back in 2002, I’ve wanted to join an open source project. A few things have held me back from doing this… including inexperience in my early career, lack of available time to commit properly to a project, and the inherent difficulty in finding that one project amongst the thousands out there that I feel strongly about. Over the last few weeks I’ve been looking into this again and have now decided to join OpenMRS. The experience of choosing a project and getting stuck in has prompted me to write about why I did it and hopefully spark some new ideas for other developers wanting to get involved in open source projects.

Why join an open source project?
Everyone has the capacity to get involved in something and help out, and software developers are no exception. I’ve always been involved in building software for business use, which has a lot of challenges and rewards – but it doesn’t really improve anyone’s life in any significant way. My reason for joining an open source project is to contribute to something that really matters… something that might genuinely improve the lives of other people. For this reason, I’ve focused on finding a project that has this as one of its core objectives.

Why OpenMRS?
The answer to this question is pretty simple… I have fairly limited “free” time as it is, so I wanted to make sure I join a project that I believe in. OpenMRS is clearly focused on providing software that can improve the lives of millions of people – particularly in the developing world – so this immediately caught my attention. My original career plan was to study medicine and become a surgeon, but after finishing school I made the decision to study software engineering instead. I don’t regret that decision, but I’ve never lost my interest in medicine, and joining the OpenMRS project gives me the opportunity to work in both.

What’s next?
When joining any new project there is always a lot to learn. Others may have been involved for years, but you have to learn the ropes before you can be of any use to the existing team. So my focus for the next few weeks will be to learn about OpenMRS from every conceivable perspective. What problem is the system trying to solve? How does it work for the user? How has it been designed? How is the data model structured? However, the biggest challenge will be gaining enough understanding of the medical domain for the system to make sense. Very fortunately, the technical foundation is quite familiar territory – so the biggest obstacle is domain knowledge.

I’ll follow up in the coming weeks with some details on how to contribute to an open source project – entirely based on my experience, rather than a definitive list of “do’s and don’ts”. Until then, I’ll be buried in documentation!