Assessing the Single Responsibility Principle with LCOM4 and Sonar 2.0

A while ago I set up Sonar as part of my team’s continuous integration system. Every time I use it, I’m amazed at what a fantastic system it is – such a seemingly small piece of software delivering such great value!

One of my biggest strengths – and weaknesses – is that I’m a perfectionist in my work. It works to my advantage in that I simply refuse to deliver poor-quality work, but it works to my disadvantage in that I sometimes fix things that aren’t broken. Don’t get me wrong – I’m very much in favour of refactoring. I’m just careful to recognise when refactoring has degenerated into needless change that adds no real value.

So how does this tie into Sonar 2.0, you ask? Well, I recently upgraded to the latest version and was looking into the cool new features it brings – in particular, the LCOM4 metric. It’s excellent when used appropriately, but it can also be the cause of the dangerous “refactoring” mentioned above.

For those of you who aren’t familiar with LCOM4, here’s the simple definition (as quoted from the Sonar Metric Definitions page):

LCOM4 measures the number of “connected components” in a class. A connected component is a set of related methods and fields. There should be only one such component in each class. If there are 2 or more components, the class should be split into so many smaller classes.
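To make that concrete, here’s a minimal sketch in Java – the class and its members are hypothetical, invented purely for illustration. The formatting members and the delivery members never touch each other’s fields or call each other, so LCOM4 counts two connected components:

```java
// Hypothetical class for illustration: LCOM4 would report 2 here,
// because the methods and fields split into two unrelated groups.
public class ReportManager {

    // Component 1: formatting – these members only touch 'template'.
    private String template;

    public void setTemplate(String template) {
        this.template = template;
    }

    public String format(String body) {
        return template.replace("{body}", body);
    }

    // Component 2: delivery – these members only touch 'smtpHost'.
    private String smtpHost;

    public void setSmtpHost(String smtpHost) {
        this.smtpHost = smtpHost;
    }

    public void send(String message) {
        System.out.println("Sending via " + smtpHost + ": " + message);
    }
}
```

Splitting this into a formatter class and a sender class would bring each class’s LCOM4 down to 1 – and in a case like this, that genuinely is the better design.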

This is a great measure of the cohesion of your code (LCOM stands for Lack of Cohesion of Methods). Simply put, it helps you identify when your classes violate the Single Responsibility Principle (or are heading in that direction). Well – at least, that’s the basic idea. The problem comes in when you take the definition as gospel and forget that you still need to interpret the metric yourself.

Ignore the rule altogether and you risk a product that grows in complexity and so becomes harder to maintain, troubleshoot, and extend. Accept the rule without question and you risk adding complexity to an already decent design. You need to find the middle ground for your own situation.

To put this into a real-world example, consider the design patterns we all use in our software – in particular, the Abstract Factory pattern. Analysed according to LCOM4 rules, an Abstract Factory is likely to be a high scorer: its creation methods typically share no fields and never call one another, so each method counts as its own connected component. I’ve seen this with an Abstract Factory in one of my own projects, but I’m not about to change my design or implementation just to reduce that class’s LCOM4 score. That’s not to say it can’t be improved – I just need to be clear about how reducing the LCOM4 score is going to improve my code. After all, the metrics attempt to expose problems so that they can be addressed in the design process to produce better code – not code designed to produce better metrics. While this process is cyclical to an extent, the desired outcome should be better code, not necessarily better metrics.
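To illustrate, here’s a hedged sketch in Java – the widget types and factory are hypothetical, invented for this post rather than taken from my project:

```java
// Stub widget types, just enough to make the sketch self-contained.
interface Button {}
interface Checkbox {}
class SwingButton implements Button {}
class SwingCheckbox implements Checkbox {}

interface WidgetFactory {
    Button createButton();
    Checkbox createCheckbox();
}

// Each creation method is stateless, uses no fields, and never calls its
// sibling, so LCOM4 sees two methods as two connected components – even
// though the class has exactly one responsibility: creating Swing widgets.
class SwingWidgetFactory implements WidgetFactory {
    @Override
    public Button createButton() {
        return new SwingButton();
    }

    @Override
    public Checkbox createCheckbox() {
        return new SwingCheckbox();
    }
}
```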

Of course, there are good ways of reducing an Abstract Factory’s LCOM4 score through refactoring (one is sketched below), but my point is that you must intelligently assess what the problem is and how to solve it before simply diving in and making changes.
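Continuing the hypothetical factory from the previous sketch, one option is to route all creation through a shared collaborator. The shared field links the methods into a single connected component, so LCOM4 drops to 1 – whether the design actually got better, or the metric merely got flattered, is exactly the judgement call I’m talking about:

```java
// A small helper that actually does the construction work.
class SwingToolkit {
    Button newButton() { return new SwingButton(); }
    Checkbox newCheckbox() { return new SwingCheckbox(); }
}

// Both methods now touch the 'toolkit' field, so LCOM4 reports a single
// connected component. The score improved – the design may or may not have.
class RefactoredSwingWidgetFactory implements WidgetFactory {
    private final SwingToolkit toolkit = new SwingToolkit();

    @Override
    public Button createButton() {
        return toolkit.newButton();
    }

    @Override
    public Checkbox createCheckbox() {
        return toolkit.newCheckbox();
    }
}
```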

Having seen the benefits of static code analysis performed regularly as part of a continuous integration process, I will never go back. Sonar is an amazing product, and I definitely recommend looking into it if you’re not already using it!

EDIT: Check out this blog post if you want to know more about what’s new in Sonar 2.0.
