YAML and the importance of Schema Validation

I’ve been working with YAML over the past few weeks as part of a new configuration framework, and have been quite impressed with the simplicity of the language and how easy it is to learn. I’ve in no way stretched its capabilities, but I did find one serious weakness: it doesn’t have an official schema language for validating documents. I’m aware of tools like Kwalify that supposedly do the job – but where is the official schema language?

Having used XML for many years, I’ve come to depend on XML Schema to validate my documents, and the idea of using a language without any form of validation is a little worrying. Despite this flaw, we’ve still decided to use it – but it does leave us in a vulnerable position. We now have to create a set of tools to validate our configuration files, which is an unfortunate burden on the team – and something that would’ve been easily achievable in a matter of hours had we chosen to use XML. XML Schema is very well defined and very well known, so there’s no need for me to list the benefits it brings. But YAML has been around since 2004, so it’s quite disappointing that it still lacks such a fundamental tool.
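To make that concrete, here’s a minimal sketch of the sort of schema I’m used to writing – a hypothetical XML Schema for a tiny configuration file (the site-config and feature names are invented purely for illustration):

  <?xml version="1.0" encoding="UTF-8"?>
  <xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
    <!-- hypothetical configuration vocabulary, for illustration only -->
    <xs:element name="site-config">
      <xs:complexType>
        <xs:sequence>
          <xs:element name="feature" maxOccurs="unbounded">
            <xs:complexType>
              <xs:attribute name="name" type="xs:string" use="required"/>
              <xs:attribute name="enabled" type="xs:boolean" default="false"/>
            </xs:complexType>
          </xs:element>
        </xs:sequence>
      </xs:complexType>
    </xs:element>
  </xs:schema>

Any standard validating parser will reject a document that doesn’t conform to this before the application ever sees it – and that’s exactly the guarantee we now have to build by hand for our YAML files.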

So why is validation so important? The answer to that question depends on what you’re doing with the documents, and what the consequences would be if they were invalid. In our case, these configuration files will drive virtually every aspect of our site. If the configuration is invalid, it will directly affect our users… maybe just a few users, or maybe all of them. Neither of those scenarios is acceptable.

This is an area where XML really comes into its own. XML Schema and XSLT are powerful additions to the XML toolkit, and together they make a very compelling case for using it. Unfortunately, the fact remains that XML is sometimes more verbose than we would like it to be, which is where YAML is the obvious winner.

In the meantime, we’re left with the job of developing our own tools to validate our configuration files. It’s not a big job, but it shouldn’t be necessary. Given a decent validation language, this should be trivial. That raises another issue with building custom tools for validation – we need to test that the validator is validating correctly. Again – not a big task, but it shouldn’t be necessary.

Hopefully the designers of YAML will one day give us an official schema language!


The “sticky shotgun” retrospective

I’ve been meaning to write this since early last week, but my first two weeks at Betfair have kept me fairly busy. After a nice chilled weekend, I finally have the time to get this down!

So I started a new job at Betfair at the beginning of April and have gradually been introduced to a few different ways of doing things. On my second day, we had a retrospective for the team’s previous sprint – a sprint I obviously hadn’t been part of – so I had the benefit of seeing how they do their retrospectives before being actively involved in one. The approach was subtly different from what I’ve done before, and I thought it worth sharing with others who work in a Scrum team.

Martin explained how it worked (for the benefit of a few of us that hadn’t done it their way before) and used the term “sticky shotgun” – which I think describes it quite aptly. Here’s how it works…

Everyone is given a marker and a pad of PostIt notes (or “stickies”) and writes down whatever comes to mind from the previous sprint. When they’re done, they all go up and stick them on the whiteboard in the correct zone (hence the term “sticky shotgun”). They use five zones: Start, Stop, Keep, More, and Less.

After the sticky shotgun, the stickies are clustered into groups of similar issues. As you might expect, there are bound to be a lot of similar issues raised, as everyone has worked on the same user stories within the same team for the last two weeks, and grouping them is much quicker than going through each sticky individually. To make sure nothing has been misinterpreted, each sticky is checked with its author before it’s placed in a group. We then briefly discuss the issues and try to identify what needs to be done to solve them – or, in the case of the “keep” issues, what was done that makes the item worth keeping.

After the grouping and discussion, everyone is allowed to select their three most important groups. This helps rank the groups in order of importance so the team knows what needs to be addressed first.

This approach seems to provide a few distinct advantages over the way I’ve done retrospectives in the past:

Firstly, it allows a greater degree of participation. Not everyone is comfortable voicing their opinions in a group – particularly when they’re talking about the areas that need improvement. Allowing each person to note their thoughts down independently ensures that their opinions are heard.

Secondly, it speeds up what can be a very time-consuming meeting. It is very likely that a number of people have similar issues. This method allows everyone to voice their opinions without needing hours of everyone’s time. It also clearly identifies those opinions that are shared by a large number of people on the team.

Thirdly, it prevents snowballing. Quite often, one issue sparks off another and another – and before you know it, you’re no longer focused on the right things. The point of the retrospective is to raise the issues, not to solve them. This approach provides a relatively short period of problem identification, followed by a short period of problem definition, followed by a final short period of problem prioritisation.

Overall, it was very interesting to see a different slant on retrospectives, as they seem to be the quiet brother/sister in the Scrum family that no one talks about. There are hundreds of discussions over how we write user stories, how we estimate and assign story points to our user stories, how we sell Agile to customers – but I don’t recall seeing as much discussion over retrospectives.

Ultimately, it is the retrospectives that help us to see what we’re doing right or wrong and where we can improve. So it’s definitely a topic worth talking about.


Opposing views on the UML {xor} constraint

During my last lecture week at Oxford, a debate arose over the interpretation of the xor constraint within UML class diagrams. We were divided over two differing opinions, but were unable to reach consensus on which was correct. I’ll put forward the two views and give my opinion on the matter, but would be interested to hear if anyone knows the correct way to interpret this constraint.

This is the class diagram I’ll use as a reference for the opinions explained below. I’ve deliberately used simple names for the classes to avoid misinterpretation, as I’ve found that people often get distracted by the “real world” application of the diagram rather than the semantics of the notation – which is what we’re really trying to understand here.


Opinion A: the xor constraint applies to the association
In this case, we say that the xor constraint applies to the association itself, which means that the multiplicity of the association remains in effect. In other words, one of the following must be true (but not both):

  • EITHER: An object of type A may be associated to zero or more objects of type B
  • OR: An object of type A may be associated to zero or more objects of type C

Opinion B: the xor constraint applies to the link (the association instance)
In this case, we say that the xor constraint applies to the link between the objects, which means that the multiplicity specified on the association is effectively ignored. Regardless of the multiplicity specified on the association, the xor constraint requires that one of the following must be true (but not both):

  • EITHER: An object of type A must be associated to exactly one object of type B
  • OR: An object of type A must be associated to exactly one object of type C

According to UML 2 Toolkit (OMG Press), page 108:

The xor constraint specifies that objects of a class may participate in, at most, one of the associations at a time.

From what I can tell, this appears to support Opinion A above, which is also what I believe. This interpretation appears to be further reinforced on page 303.

It seems unnecessarily restrictive to apply the xor constraint to the links, as this substantially limits the usefulness of the constraint. In addition, Opinion A allows Opinion B to be modelled simply by using the xor constraint with multiplicities of 1. So, given that Opinion A can be used to model a much wider range of possibilities, I don’t see how Opinion B would be very useful. Opinion B also opens the model up to miscommunication: since the multiplicities would no longer make any sense, they would have to be disallowed whenever an association is used with the xor constraint.

That said, this is simply my opinion based on what I feel is logical. I don’t actually know for a fact which is correct. I completely understand why holders of Opinion B see it their way – I just don’t see how such strict semantics would be useful in this case.

Does anyone else have a different opinion on this? Or perhaps know the correct answer?


Is that REALLY the tone you want to encourage?

You can tell when I’m on a lecture week at Oxford, because I suddenly have the time to catch up on all the blogging that I intended to do over the past weeks/months!

This is something I stumbled onto while browsing the latest news on LinkedIn Today. Startmate, a startup incubator in Australia, has provided funding for a bunch of companies – among them, a company called Chorus. According to TechCrunch:

Chorus lets companies reply to the angriest emails first, decrease risk of bad PR on social networks, and predict future trends and sentiment. The goal: more customer love.

Now, I completely see where they’re coming from… but is that REALLY the kind of customer engagement you want to encourage? Surely, if your customers realise they’re only getting fast responses when they’re angry, they’ll start to only send angry emails. It won’t be long before they start to associate their anger with your customer service.

Having recently dealt with terrible customer service from a “reputable” UK web brand, I have a fairly fresh perspective on this. Admittedly, this is just my own perspective and there’s absolutely no doubt others will act differently. I was quite polite initially, and only got angry when they simply refused to do what I had asked (which was to close my account). It took over 20 minutes for them to get the message, which is why I got angry. If I had known that they would only take me seriously (or would only consider me a priority) when I became angry, I’d simply have got there sooner!

I’m sure there’ll be a lot of disagreement over this point – and I’m not a marketing expert – but I think it’s fair to say that you shouldn’t be surprised when your customers hate you purely because they have to in order to get a response from you! I can only imagine how well that would work in a classroom of young children 🙂

I just hope Chorus is intelligent enough to separate polite anger from rudeness.


Certification and Competence

I found it quite interesting reading Martin Fowler’s recent post about the correlation between certification and competence, especially so recently after writing about the changes to some of Oracle’s certifications!

I have to say that I completely agree with his opinion – and I know it’s shared by most (if not all) of my friends. Having earned a number of certifications over the course of my career, I have seen first-hand just how useless so many of them are. I must emphasise that this is not necessarily true of all certifications. However, in my experience, none has done enough to establish its holder as an expert on the subject – and this is really where they fail.

So why did I decide to take on the Java Enterprise Architect certification? Well, I approached my investigation of it with the usual skepticism, but was finally convinced by one major aspect: it is ultimately assessed by a human being! As Fowler points out:

“At the moment the only way you can tell if someone is a good programmer is to find other good programmers to assess their ability.”

I don’t like certifications that consist of nothing but multiple-choice questions – not enough can be tested in a few multiple-choice questions to certify someone as mediocre, let alone an expert! Unless the candidate has had to form an opinion and defend it, you haven’t really tested their ability to apply their knowledge and reasoning; you’ve just asked them to regurgitate simple facts. At best, you’ve tested their ability to recognise a solution to a particular problem – but you haven’t established whether, given a blank sheet of paper, they could come to that solution themselves.

I’m hoping that the Java Enterprise Architect certification is able to distinguish the competent from the incompetent, but at this stage I’m not sure. I do know one thing though… adding a course attendance requirement does not strengthen the certification. Unfortunately, I feel this is where the “good money-making opportunity” that Fowler mentions comes into play.


Changes to Oracle Certification for Java Architects

I’ve recently completed the first exam towards the Oracle Certified Master, Java EE 5 Enterprise Architect and had aimed to complete the assignment later this year. However, I just happened to go to the Oracle Certification website where I saw that the rules are changing for a few of their certifications.

Quoted from the announcement on their website:

Beginning August 1, 2011, Java Architect, Java Developer, Solaris System Administrator and Solaris Security Administrator certification path requirements will include a new mandatory course attendance requirement.

I can’t say I’m impressed with how quietly they announced this! I haven’t seen anything on the OTN Java site about it, nor have I seen any attempt to make developers aware of this through other channels. I also have not received any emails from them about the changes to a certification track that I am currently working on. Unless I’ve managed to miss the announcement somewhere, I can only guess that they’re trying to sneak this in quietly to get more money out of us. Perhaps I’m being a bit cynical, but I don’t think it’s too far from the truth, given Oracle’s recent track record with the Java community.

So if you’re busy working towards any of these certifications, I suggest you pick up the pace and get it done by the end of July! And here I thought things were starting to settle down.

Fortunately the certification assignment has a very similar focus to my next assignment at Oxford. If all goes well, I might just manage to get it done in time.


SAP Deployment with Ant

A few months ago, I wrote about our continuous integration system and how I’d hooked everything up to automate as much as possible. One of the key components of this suite of tools was a SAP NetWeaver Deployment Plugin for Maven that I wrote. Since then, I have received a few comments and queries about this plugin and have been asked whether it’s something we’re actively developing. I had intended to release it as an open source plugin (and may still do so at some stage), but in the meantime I have set up another option that has proven very useful – and that’s what this post is all about.

This time I’ve provided the documented Ant build file at the end of this article, which you are welcome to download and use for your own development.

How does it work?

Within the standard Maven project object model, you have the ability to define artifact repositories within the <distributionManagement /> section of the pom.xml file. Maven uses these settings to deploy the artifact that was generated during the build process – which, in our case, simply means deploying it to Nexus. This makes it easy to reference the artifact in other projects and ensures that it is appropriately shared (either internally or publicly, depending on how you have configured it). It’s up to you to decide how and when you deploy to Nexus, but the point is that it ends up in a shared repository. This should be done for you as part of your release process if you use the Maven Release Plugin, although the artifact can be uploaded manually too (either via Nexus or via the Maven Install Plugin / Maven Deploy Plugin).

Now that we have our artifact in Nexus, we can simply download it whenever we need it. We don’t want to rebuild the sources to produce another copy of the already released artifact – we simply want to use the same compiled artifact that we originally produced.
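For reference, the distributionManagement section I’m describing looks something like this – the repository ids and URLs below are placeholders, so substitute the details of your own Nexus instance:

  <distributionManagement>
    <!-- placeholder ids/URLs: point these at your own Nexus repositories -->
    <repository>
      <id>nexus-releases</id>
      <url>http://nexus.example.com/content/repositories/releases</url>
    </repository>
    <snapshotRepository>
      <id>nexus-snapshots</id>
      <url>http://nexus.example.com/content/repositories/snapshots</url>
    </snapshotRepository>
  </distributionManagement>

The credentials for those repositories then go into matching <server> entries (matched by id) in your Maven settings.xml.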

We used Ant to create a build script – modelled on the sample script provided in the SAP NetWeaver CE 7.1 Developer Edition installation – that would allow us to run a deployment from anywhere to anywhere. The standard script provided by SAP is tied to the directory where it lives – which is not very useful, since we really don’t need a full NWCE installation just to fulfill a few dependencies. So we decided to combine one of the best features of Maven (dependency management) with the simplicity and clarity of Ant, and have produced a build script with no local ties. The next few sections will explain what you need to make this work and how to set up your environment (the script also includes some documentation).

Setup

You’ll obviously need to have Ant installed for this to work. In addition, you’ll need to download the Maven Ant Tasks and copy the JAR into your {ANT_HOME}/lib folder. The next thing you need is your own internal Nexus repository manager. This is technically not required, as you could simply install all the dependencies locally – although it’s more realistic that you’ll want to use a shared repository for your team / company.

Once you have your Nexus repo manager configured, you need to deploy the SAP deployment JAR files to the 3rd Party release repository. These are not made available in any public repo, so you’ll need to host them yourself. I’ve outlined the details I used to upload the deployment libraries in our system, but you’re free to use whatever GAV parameters you like (so long as you update the Ant build file as well). You can find all of these files in the SAP installation directory – just look in the sample Ant script (in /usr/sap/LCL/J00/j2ee/deployment/scripts) for the relative file locations.

JAR File                                     Maven Artifact (groupId:artifactId:version)
tc~je~dc_ant.jar                             com.sap.ant:sap-ant-tasks:7.1.1
sap.com~tc~exception~impl.jar                com.sap:tc.exception.impl:7.1.1
sap.com~tc~je~clientlib~impl.jar             com.sap:tc.je.clientlib.impl:7.1.1
sap.com~tc~logging~java~impl.jar             com.sap:tc.logging.java.impl:7.1.1
sap.com~tc~je~deployment14~impl.jar          com.sap:tc.je.deployment14.impl:7.1.1
sap.com~tc~bl~jarsap~impl.jar                com.sap:tc.bl.jarsap.impl:7.1.1
sap.com~tc~sapxmltoolkit~sapxmltoolkit.jar   com.sap:tc.sapxmltoolkit:7.1.1
sap.com~tc~bl~sl~utility~impl.jar            com.sap:tc.bl.sl.utility.impl:7.1.1

Note: I’ve used version 7.1.1 for all of these SAP libraries as I’m running NetWeaver CE 7.1 with Enhancement Pack 1 (more on how to check versions here).

When you run Ant with the build file, it will automatically resolve and download the dependencies via Nexus (using the Maven Ant Tasks). So all you need to do is specify what you want to deploy or undeploy, and then let Ant and Maven do their thing. If in doubt, simply run Ant with the build file and the -projecthelp option.
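To give you an idea of what that looks like, here’s a trimmed-down sketch of the dependency resolution part of the build file. The repository URL is a placeholder, only two of the SAP libraries from the table above are shown, and the actual deploy/undeploy targets (which use SAP’s own Ant tasks, as in their sample script) are left out to keep things short:

  <project name="sap-deploy" xmlns:artifact="antlib:org.apache.maven.artifact.ant">
    <!-- placeholder URL: point this at your own Nexus instance -->
    <artifact:remoteRepository id="nexus" url="http://nexus.example.com/content/groups/public"/>

    <!-- resolve the SAP deployment libraries (see the table above) into a classpath -->
    <artifact:dependencies pathId="sap.deploy.classpath">
      <remoteRepository refid="nexus"/>
      <dependency groupId="com.sap.ant" artifactId="sap-ant-tasks" version="7.1.1"/>
      <dependency groupId="com.sap" artifactId="tc.je.deployment14.impl" version="7.1.1"/>
      <!-- ...plus the remaining libraries from the table... -->
    </artifact:dependencies>
  </project>

The resulting sap.deploy.classpath is what the deploy and undeploy targets use to load and run SAP’s deployment tasks; those targets follow SAP’s sample script, so I haven’t reproduced them here.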

Good luck, and I hope this has helped you!

Download: SAP Deployment with Ant