29 October 2012

Automate JavaScript testing

I like TDD and test automation, and I consider it essential to have a toolset that automates unit, integration, and system tests, as well as code coverage analysis.

Switching from Java to JavaScript development, I tried to put together a Maven-based toolset resembling the classic automation tools for Java: a testing framework (with a runner I can launch through mvn clean test) and a code coverage tool (which I can run, for example, through mvn verify, obtaining a graphical coverage report).

The result is this small example of a JavaScript library whose Maven-based automation is achieved through a number of excellent frameworks (Jasmine) and Maven plugins (Jasmine Maven Plugin, Saga JS coverage plugin).

Feel free to check out my sample and adjust it to your needs!

29 May 2012

Mocking static methods and the Gateway pattern

A year ago I started to use mocking libraries (e.g., Mockito, EasyMock, ...), both to learn something new and for testing purposes in hopeless cases.
Briefly: such a library makes it possible to dynamically redefine the behaviour (return values, thrown exceptions) of the methods of the class under test's collaborators, in order to run tests in a controlled environment. It even makes it possible to check behavioural expectations on mock objects, in order to test the class under test's interactions with its collaborators.
A few weeks ago a colleague asked me: "How can I mock a static method, possibly using a mocking library?".
More specifically, he was looking for a way to test a class whose code used a static CustomerLoginFacade.login(String username, String password) method provided by an external API (a custom authentication API from a customer enterprise).
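To make the idea concrete, here is a stdlib-only sketch of what a mocking library does under the hood - not a real mocking library, just java.lang.reflect.Proxy, with a hypothetical Collaborator interface invented for this example: the method's behaviour is redefined at runtime, and every invocation is recorded for later verification.

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;
import java.util.ArrayList;
import java.util.List;

public class HandRolledMock {
    // Hypothetical collaborator interface, just for this sketch
    public interface Collaborator {
        boolean login(String username, String password);
    }

    // Builds a mock that always returns true and records invoked method names
    public static Collaborator mockCollaborator(final List<String> invocations) {
        return (Collaborator) Proxy.newProxyInstance(
                Collaborator.class.getClassLoader(),
                new Class<?>[] { Collaborator.class },
                new InvocationHandler() {
                    public Object invoke(Object proxy, Method method, Object[] args) {
                        invocations.add(method.getName());
                        return Boolean.TRUE; // stubbed return value
                    }
                });
    }

    public static void main(String[] args) {
        List<String> invocations = new ArrayList<String>();
        Collaborator mock = mockCollaborator(invocations);
        System.out.println(mock.login("duke", "secret")); // true
        System.out.println(invocations);                  // [login]
    }
}
```

Real libraries do much more (argument matching, ordered verification, partial mocks), but the core trick - intercepting calls and substituting canned behaviour - is the same.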
His code looked as follows:

public class ClassUnderTest {
 public void methodUnderTest(...) {
   // check authentication
   if(CustomerLoginFacade.login(...)) {
     ...
   } else {
     ...
   }
 }
}
But the customer's authentication provider was not accessible from the test environment: this was the main (but not the only: test isolation, performance, ...) reason to mock the static login method.

A quick search in the magic mocking libraries world revealed that:
  • EasyMock supports static method mocking through extensions (e.g., Class Extension, PowerMock)
  • JMock doesn't support static method mocking
  • Mockito (my preferred [Java] mocking library at the moment) doesn't support static method mocking, because Mockito prefers object orientation and dependency injection over static, procedural code that is hard to understand and change (see the official FAQ). The same position appears in a JMock-related discussion, too. PowerMock provides a Mockito extension that supports static method mocking.
So, thanks to my colleague, I will analyze the more general question: "How can I handle an external / legacy API (e.g., static methods acting as a service facade) for testing purposes?". I can identify three different approaches:
  • mocking by library: we can use a mocking library supporting external / legacy API mocking (e.g., class mocking, static method mocking), as discussed earlier
  • mocking by language: we can rely on the features of a dynamically typed programming language to dynamically change the external / legacy API's implementation / behaviour. E.g., the login problem discussed earlier can be solved in Groovy style, using the features of a language fully integrated with the Java runtime: 
CustomerLoginFacade.metaClass.'static'.login = { username, password ->
              return true;
}

Such an approach can be used successfully when the client code calling CustomerLoginFacade.login is Groovy code, but not for plain old Java client code.
  • Architectural approach: mocking by design. This approach rests on a general principle: hide every external (concrete) API behind an interface (i.e.: code against interfaces, not concrete implementations). This principle is commonly known as the dependency inversion principle.
So, we can solve my colleague's problem this way: first, we define a login interface:

public interface MyLoginService {
 public abstract boolean login(final String username, final String password);
}

Then, we refactor the original methodUnderTest code to use the interface:

public class ClassUnderTest {
 private MyLoginService loginService;

 // Collaborator provided by constructor injection (see here for
 //  a discussion about injection styles)
 public ClassUnderTest(final MyLoginService loginService) {
  this.loginService = loginService;
 }

 public void methodUnderTest(...) {
   // check authentication
   if(loginService.login(...)) {
     ...
   } else {
     ...
   }
 }
}

So, for testing purposes, we can simply inject a fake implementation of the MyLoginService interface:

public void myTest() {
 final ClassUnderTest cut = new ClassUnderTest(new FakeLoginService());
 cut.methodUnderTest(..., ...);
}

where FakeLoginService is simply

public class FakeLoginService implements MyLoginService {
 public boolean login(final String username, final String password) {
  return true;
 }
}
and the real, production implementation of the interface looks simply like this:

public class RealLoginService implements MyLoginService {
 public boolean login(final String username, final String password) {
  return CustomerLoginFacade.login(username, password);
 }
}
Ultimately, the interface defines an abstract gateway to the external authentication API: by changing the gateway implementation, we can set up a testing environment fully decoupled from the real customer's authentication provider.
IMHO, the last mocking approach is the one I prefer: it's more object oriented, and after all... a colleague once called me the most OO person he knows :-). I find this approach cleaner and more elegant: it's built only upon common features of programming languages and relies neither on external libraries nor on testing-oriented dynamic language features.
In terms of design, too, I think it's a more readable and more reusable solution to the problem, one which allows a clearer identification of the responsibilities of the various pieces of code: MyLoginService defines an interface, and every implementation represents a way to fulfil it (a real-life (i.e.: production) implementation versus the fake one).

However, method mocking (by library or by language, it doesn't matter) is a very useful technique in certain specific situations, too, especially when the code suffering from static dependencies (ClassUnderTest in our example) is legacy code, designed with no testing in mind, and possibly out of the developer's control.
[Incidentally: the solution adopted by my colleague was exactly the one I proposed (i.e., mocking by design)]

Credits: thanks to Samuele for giving me cause to analyze such a problem (and for our frequent and ever interesting design-related discussions). Thanks to my wife for her valuable support with my writing in pseudo-English.

24 May 2012

Eloquent JavaScript - An opinionated guide to programming

Eloquent JavaScript - An opinionated guide to programming - Marijn Haverbeke, 2011

An interesting and useful guide to JavaScript, whose "opinionated" approach can reconcile you with this love-hated language. I found the sections about functional programming and object oriented programming extremely interesting: I've finally found a systematic presentation of OOP in JavaScript!

21 May 2012

How to automatically test Java Console

A few weeks ago, during a lab lesson at university, I faced a typical TDD-addict's dilemma: how can I test-drive the development of a console-based Java application?
The main problem is clearly how to interact with the application automatically, since it relies on System.in for user input and on System.out for user output.
You can use the System.setIn and System.setOut methods, of course, but IMHO this is a dirty solution to the console-interaction testability problem, which can be solved in a cleaner way by referring to the dependency inversion principle, ubiquitous in test-driven design: rather than referring directly to the concrete System.in and System.out streams (the reference is concrete because it's direct, not because it points to a concrete class: InputStream is actually an abstract class), the console-based application should reference some abstraction that encapsulates the standard I/O stream dependency - for example a Scanner (for user input) and a PrintStream (for user output: again, a direct reference to System.out is concrete because it's direct, not because it points to something concrete).
So, the application behaviour can be encapsulated into a class having a constructor like this:
public HelloApp(Scanner scanner, PrintStream out)

The application's main method instantiates such a class and invokes a method that triggers the application logic, simply providing a Scanner and a PrintStream that wrap the standard I/O streams:

public static void main(String[] args) {
        Scanner scanner = new Scanner(System.in);
        HelloApp app = new HelloApp(scanner, System.out);
        app.run(); // the method that triggers the application logic
}

Testing code, however, can provide HelloApp with testing-oriented instances of Scanner and PrintStream:

final Scanner scanner = new Scanner("Duke y Goofy y Donald n");
scanner.useDelimiter(" ");
ByteArrayOutputStream outputBuffer = new ByteArrayOutputStream();
PrintStream out = new PrintStream(outputBuffer);
final HelloApp app = new HelloApp(scanner, out);
app.run(); // the method that triggers the application logic
final String output = outputBuffer.toString();
// Assertions about outputBuffer content:
assertTrue(output.startsWith("Welcome to HelloApp!"));

So, we have gracefully decoupled the application logic from console-based user interaction, providing a solid foundation for automated application testing and, even more satisfying for the TDD addict, for Test Driven Development.
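Putting the pieces together, here is a minimal, self-contained sketch of the whole idea. HelloApp's actual behaviour - the run() method name and the exact messages - is hypothetical, invented just to make the example runnable:

```java
import java.io.ByteArrayOutputStream;
import java.io.PrintStream;
import java.util.Scanner;

public class HelloApp {
    private final Scanner scanner;
    private final PrintStream out;

    public HelloApp(Scanner scanner, PrintStream out) {
        this.scanner = scanner;
        this.out = out;
    }

    // The method that triggers the application logic
    public void run() {
        out.println("Welcome to HelloApp!");
        out.println("Hello, " + scanner.next() + "!");
    }

    public static void main(String[] args) {
        // Production wiring would use the real console streams:
        // new HelloApp(new Scanner(System.in), System.out).run();

        // Here we exercise the app against in-memory streams, as a test would:
        ByteArrayOutputStream buffer = new ByteArrayOutputStream();
        HelloApp app = new HelloApp(new Scanner("Duke"), new PrintStream(buffer));
        app.run();
        System.out.println(buffer.toString().startsWith("Welcome to HelloApp!"));
    }
}
```

The same class runs unchanged against the real console or against in-memory streams: only the wiring differs.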

The code repository can be cloned using git:
git clone https://bitbucket.org/pietrom/automatically-testing-the-console.git

16 May 2012

JNDI name duplication problem on WebSphere

Yesterday a colleague and I faced a subtle problem while deploying a suite of enterprise applications on WebSphere Application Server, version 6.1.
The problem manifested itself as a marshalling error when a webapp called a service exposed as an EJB in the same enterprise application: this was very annoying due to two main aspects:
  1. there was no duplication, between the webapp and the EJB module, of the classes involved in the call
  2. the issue seemed to occur randomly: not for all calls (and never for some), and not always for the same call (apparently depending on application restarts)
After a few hours of a stop-and-start-and-read-the-log nightmare, we discovered that at the root of the problem was a JNDI name duplication between EJBs published by two different applications of the suite: these applications are based on the same infrastructural framework, and the framework published some framework services using a fixed JNDI name. This was clearly an error in suite packaging, but... WebSphere did not report this name collision in any way: it deployed both applications without errors or warnings, and the duplicated JNDI name was associated with one implementation of the service or with the other, depending on the applications' startup order. So:
  1. the marshalling problem appeared when the webapp from one EAR called the service published by the other application
  2. the issue occurred only for the EJB with the duplicated name, not for the others; and it occurred or not depending on the startup order
The solution was quite simple: changing the JNDI name of one of the twin services.
But... why doesn't the application server give any error when deploying an EJB using an already-in-use JNDI name, as, for example, the JBoss and BEA WebLogic application servers do?

Definitely something to remember!

17 April 2012

Quick trick: solving an MDB deploy problem on WebLogic

Quick trick about a problem I faced deploying a Message Driven Bean on BEA WebLogic: the MDB's configuration in ejb-jar.xml contained an <acknowledge-mode> element alongside a Container transaction type, along these lines:

<message-driven>
   ...
   <transaction-type>Container</transaction-type>
   <acknowledge-mode>Auto-acknowledge</acknowledge-mode>
</message-driven>

This configuration worked well on JBoss 4, but caused an exception during the deployment phase on BEA WebLogic 10:

Caused By: 
org.hibernate.HibernateException: The chosen transaction strategy requires access to the JTA TransactionManager
 at org.hibernate.impl.SessionFactoryImpl.<init>(SessionFactoryImpl.java:371)
 at org.hibernate.cfg.Configuration.buildSessionFactory(Configuration.java:1341)
 at org.hibernate.cfg.AnnotationConfiguration.buildSessionFactory(AnnotationConfiguration.java:867)
 at org.hibernate.ejb.Ejb3Configuration.buildEntityManagerFactory(Ejb3Configuration.java:669)
 at org.hibernate.ejb.HibernatePersistence.createContainerEntityManagerFactory(HibernatePersistence.java:132)
 at weblogic.deployment.PersistenceUnitInfoImpl.createEntityManagerFactory(PersistenceUnitInfoImpl.java:355)
 at weblogic.deployment.PersistenceUnitInfoImpl.createEntityManagerFactory(PersistenceUnitInfoImpl.java:333)
 at weblogic.deployment.PersistenceUnitInfoImpl.<init>(PersistenceUnitInfoImpl.java:135)
 at weblogic.deployment.AbstractPersistenceUnitRegistry.storeDescriptors(AbstractPersistenceUnitRegistry.java:336)
 at weblogic.deployment.EarPersistenceUnitRegistry.initialize(EarPersistenceUnitRegistry.java:77)
 at weblogic.application.internal.flow.InitJpaFlow.prepare(InitJpaFlow.java:38)
 at weblogic.application.internal.BaseDeployment$1.next(BaseDeployment.java:1223)
 at weblogic.application.utils.StateMachineDriver.nextState(StateMachineDriver.java:41)
 at weblogic.application.internal.BaseDeployment.prepare(BaseDeployment.java:367)
 at weblogic.application.internal.EarDeployment.prepare(EarDeployment.java:58)
 at weblogic.application.internal.DeploymentStateChecker.prepare(DeploymentStateChecker.java:154)
 at weblogic.deploy.internal.targetserver.AppContainerInvoker.prepare(AppContainerInvoker.java:60)
 at weblogic.deploy.internal.targetserver.operations.ActivateOperation.createAndPrepareContainer(ActivateOperation.java:208)
 at weblogic.deploy.internal.targetserver.operations.ActivateOperation.doPrepare(ActivateOperation.java:98)
 at weblogic.deploy.internal.targetserver.operations.AbstractOperation.prepare(AbstractOperation.java:217)
 at weblogic.deploy.internal.targetserver.DeploymentManager.handleDeploymentPrepare(DeploymentManager.java:749)
 at weblogic.deploy.internal.targetserver.DeploymentManager.prepareDeploymentList(DeploymentManager.java:1216)
 at weblogic.deploy.internal.targetserver.DeploymentManager.handlePrepare(DeploymentManager.java:250)
 at weblogic.deploy.internal.targetserver.DeploymentServiceDispatcher.prepare(DeploymentServiceDispatcher.java:160)
 at weblogic.deploy.service.internal.targetserver.DeploymentReceiverCallbackDeliverer.doPrepareCallback(DeploymentReceiverCallbackDeliverer.java:171)
 at weblogic.deploy.service.internal.targetserver.DeploymentReceiverCallbackDeliverer.access$000(DeploymentReceiverCallbackDeliverer.java:13)
 at weblogic.deploy.service.internal.targetserver.DeploymentReceiverCallbackDeliverer$1.run(DeploymentReceiverCallbackDeliverer.java:47)
 at weblogic.work.SelfTuningWorkManagerImpl$WorkAdapterImpl.run(SelfTuningWorkManagerImpl.java:528)
 at weblogic.work.ExecuteThread.execute(ExecuteThread.java:201)
 at weblogic.work.ExecuteThread.run(ExecuteThread.java:173)

The problem was the presence of the <acknowledge-mode> tag: it has no effect when <transaction-type> is Container (in this case message acknowledgement coincides with the transaction commit). JBoss ignores the <acknowledge-mode> tag, but WebLogic validates the configuration and raises an exception when the tag's presence is meaningless.
Removing the tag solved the problem.

13 April 2012

How-to find a class in a JAR directory using shell scripting

The biggest problems in J2EE application deployment often come from classloader hierarchies and potential overlaps between server-provided and application-specific libraries. Searching for classes through collections of JARs is therefore often the main activity involved in identifying and fixing classloader issues.
This is surely a tedious and repetitive task: so, here's a shell script you can use to automate traversing a collection of JARs and analyzing the jar command's output, searching for a pattern provided as a script parameter.

Credits: Thanks to sirowain for parameter check and return code related contributions.

# Commonly available under GPL 3 license
# Copyleft Pietro Martinelli - javapeanuts.blogspot.com
if [ -z "$1" ] ; then
        echo "Usage: $0 <pattern>"
        echo "jar tf's output will be tested against provided <pattern> in order\
          to select matching JARs"
        exit 1
fi
jarsFound=""
for file in $(find . -name "*.jar"); do
        echo "Processing file ${file} ..."
        out=$(jar tf ${file} | grep ${1})
        if [ "${out}" != "" ] ; then
                echo "  Found '${1}' in JAR file ${file}"
                jarsFound="${jarsFound} ${file}"
        fi
done
echo ""
echo "Search result:"
echo ""
if [ "${jarsFound}" != "" ] ; then
        echo "${1} found in:"
        for file in ${jarsFound} ; do
                echo "- ${file}"
        done
else
        echo "${1} not found"
fi
exit 0

This script is available on github.com:

12 April 2012


grepcode is a very useful web site that allows reading and navigating open source code in a user-friendly fashion: it's very convenient, e.g., for comparing different versions of an open source class to investigate bugs and their resolution, but also for simply browsing code when no source JARs are available in Maven repositories.
And... the search engine looks for classes, by name, across all the available packages...

I like it!

12 March 2012

How to clean-up JBoss temporary directories using bash scripting

Repetitive and annoying file system tasks are the natural field for automation through shell scripting - so, here's a script that can be used to clean up the temporary directories created by JBoss Application Server.

You can provide as a parameter the name of the node that you want to clean up - if launched without parameters, the script cleans up the temporary directories of each node under the current installation.

Tested on JBoss 4.0.5.GA and JBoss 5.1.0.GA installations.

# Commonly available under GPL 3 license
# Copyleft Pietro Martinelli - javapeanuts.blogspot.com

function cleanTmpDir {
        echo "  Cleaning ${1}/${2}"
        rm -rf "${1}/${2}"
}

function cleanNode {
        echo "Cleaning \"${1}\" jboss node"
        for tmpDir in data log tmp work ; do
                cleanTmpDir ${1} ${tmpDir}
        done
}

if [ $# -eq 0 ] ; then
        for dir in $(find server -maxdepth 1 -mindepth 1 -type d) ; do
                cleanNode ${dir}
        done
else
        if [ -e "server/${1}" ] ; then
                cleanNode "server/${1}"
        else
                echo "${1} is not a subdir of server dir"
        fi
fi

Updated: this script is now available on bitbucket.org:

28 February 2012

Never executed - never tested!

The code samples I'll publish in this post are not fakes: they come from real code, released into production.
And they are not only brilliant samples of never-tested code: they are samples of never-executed code!!! Indeed, these code snippets contain execution paths which always - always! - fail. Read and believe...

Sample #1 - NullPointerException at each catch execution

MyClass result = null;
try {
    result = callMethod(...);
} catch(Exception e) {
    // result is still null here: always throws NullPointerException...
    ... result.toString() ...
}

Sample #2: ArrayIndexOutOfBoundsException at each catch execution

try {
    result = callSomeMethod(...);
} catch(Exception e) {
    String[] messages = new String[3];
    messages[0] = ... ;
    messages[1] = ... ;
    messages[2] = ... ;
    // always throws ArrayIndexOutOfBoundsException ...
    messages[3] = ... ;
    throw new CustomException(messages);
}

Sample #3: ClassCastException whenever the if condition is verified

public class AClass {
    public void aMethod(final Object obj) {
        if(!(obj instanceof InterfaceXYZ)) {
            // obj is NOT an InterfaceXYZ here: always throws ClassCastException...
            final InterfaceXYZ xyz = (InterfaceXYZ)obj;
            ...
        }
    }
}
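For contrast, here is a sketch of what the guard was presumably meant to be (the interface and its method are hypothetical stand-ins for the original code): the cast is performed only inside the positive instanceof branch, so no ClassCastException can occur.

```java
public class InstanceOfDemo {
    // Hypothetical interface standing in for InterfaceXYZ
    public interface InterfaceXYZ {
        String name();
    }

    public static String describe(final Object obj) {
        // Cast only when obj actually implements the interface
        if (obj instanceof InterfaceXYZ) {
            final InterfaceXYZ xyz = (InterfaceXYZ) obj;
            return "xyz: " + xyz.name();
        }
        return "not an InterfaceXYZ";
    }

    public static void main(String[] args) {
        InterfaceXYZ impl = new InterfaceXYZ() {
            public String name() { return "demo"; }
        };
        System.out.println(describe(impl));       // xyz: demo
        System.out.println(describe("a String")); // not an InterfaceXYZ
    }
}
```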

23 February 2012

Exception management antipatterns - Episode I

Exception management is a tricky skill, and perhaps the most misunderstood topic in Java - and, I think, not only Java - programming. So, I will try to collect a series of posts about antipatterns in exception management: to provide a reference for anyone interested in the topic, of course, but also to analyze and deepen the issue myself.

My first post on the subject concerns a number of basic antipatterns:
  • Throw all away
  • Empty catch
  • Re-throw without cause
  • Log and rethrow
Throw all away
This is the simplest exception management antipattern we can (and shouldn't!) use, and is the typical approach used when studying a new language: no exception management at all! While this can be a legitimate approach in training contexts - I'm learning the Java IO API and will initially concentrate on the sunny day path, not on exception management - or in testing contexts - I'm writing a test that doesn't cover exception handling: another test will cover it! - it's not acceptable in normal, production code... simply because without exception management a single exception causes program termination.

public void aMethod() throws Exception {
   doSomething();
}

public void callerMethod() throws Exception {
   aMethod();
}

public void callersCallerMethod() throws Exception {
   callerMethod();
}

public static void main(String[] args) throws Exception {
   callersCallerMethod();
}
etc. etc. etc. ... when the doSomething method raises an exception, it goes up through the entire call stack and causes program termination. Oh!

Empty catch
This is the antipattern which causes the most headaches, as it hides exceptions instead of really managing them: unlike the previous antipattern, Empty catch makes bug finding and fixing very difficult, since it gives no information about the exception that has occurred: the exception simply disappears, the current operation is not completed successfully, the user is not given any warning about the unexpected application behaviour, and the developer has no useful information for analyzing the issue: great!

public void aMethod() {
   try {
      doSomething();
   } catch(Exception e) {
      // nothing to do here...
   }
}

Re-throw without cause
Another exception management antipattern I've often seen at work is Re-throw without cause: an exception is caught, the handling code constructs another, typically custom, exception and throws it... without any reference to the initial exception.
This can be useful in some specific contexts, such as code in layers that are called through a remote protocol: if you do not want to expose your internal exceptions through some type of serialization mechanism (RMI, SOAP, ...), you can throw another exception, from the interface known to the client, that expresses the problem in the client's language.
In regular code, however, creating and throwing an exception without a reference to the causing exception is something like Empty catch: it hides the real problem and causes headaches, both for users and debuggers, since it does not give sufficient information to analyze the problem.

public void aMethod() throws ABCException {
   try {
      doSomething();
   } catch(XYZException e) {
      throw new ABCException("Error!");
   }
}
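When you do need to translate exceptions, the fix is a one-liner: pass the original exception as the cause of the new one. Here's a self-contained sketch (ABCException, XYZException, and doSomething are modeled on the snippet above, with hypothetical bodies):

```java
public class CauseChainDemo {
    public static class XYZException extends Exception {
        public XYZException(String message) { super(message); }
    }

    public static class ABCException extends Exception {
        public ABCException(String message, Throwable cause) { super(message, cause); }
    }

    static void doSomething() throws XYZException {
        throw new XYZException("low-level failure");
    }

    public static void aMethod() throws ABCException {
        try {
            doSomething();
        } catch (XYZException e) {
            // The original exception travels along as the cause of the new one,
            // so the full chain survives in logs and stack traces
            throw new ABCException("Error!", e);
        }
    }

    public static void main(String[] args) {
        try {
            aMethod();
        } catch (ABCException e) {
            System.out.println(e.getMessage());            // Error!
            System.out.println(e.getCause().getMessage()); // low-level failure
        }
    }
}
```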

Log and rethrow
This antipattern concerns both exception and logging management: it consists in catching an exception for the sole purpose of logging it, and then rethrowing it (possibly wrapped in another one). This is an antipattern because it pollutes the log file with repeated stack traces, which makes the log more difficult to read and interpret. Try to search for debugging information in the logs of an application whose common exception management approach is Log and rethrow: you'll find a series of repeated stack traces, one for each catch-and-rethrow; looking for significant information in such a mess is far more difficult than looking for it in a repetition-free log file. Such an approach increases log size without adding significant information, with the effect of diluting the significant content.

public void aMethod() throws AnException {
   try {
      doSomething();
   } catch(AnException e)  {
      logger.log("Error!", e);
      throw e;
   }
}

public void bMethod() throws AnException {
   try {
      aMethod();
   } catch(AnException e)  {
      logger.log("Error!", e);
      throw e;
   }
}

public void cMethod() throws AnException {
   try {
      bMethod();
   } catch(AnException e)  {
      logger.log("Error!", e);
      throw e;
   }
}

public void dMethod() {
   try {
      cMethod();
   } catch(AnException e)  {
      logger.log("Error!", e);
      // ... and actually handle the exception here ...
   }
}

In this example AnException is logged four times before it is actually handled, and the log is four times more verbose than necessary - and four times less readable...
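A sketch of the alternative, assuming java.util.logging in place of the unspecified logger above: the inner layers just declare the exception and let it bubble up, and only the outermost layer - the one that actually handles it - logs it, so the stack trace appears exactly once:

```java
import java.util.logging.Level;
import java.util.logging.Logger;

public class LogOnceDemo {
    private static final Logger LOGGER = Logger.getLogger(LogOnceDemo.class.getName());

    public static class AnException extends Exception {
        public AnException(String message) { super(message); }
    }

    // Inner layers: no catch, no log - just declare and propagate
    public static void aMethod() throws AnException {
        throw new AnException("something went wrong");
    }

    public static void bMethod() throws AnException {
        aMethod();
    }

    // Outermost layer: the single place where the exception is logged and handled
    public static void main(String[] args) {
        try {
            bMethod();
        } catch (AnException e) {
            LOGGER.log(Level.SEVERE, "Error!", e); // one stack trace in the log
        }
    }
}
```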

21 February 2012

Book review: Extreme Programming Explained, by Kent Beck

A complete and easy-to-read introduction to the values, principles, and practices Extreme Programming is based on, written by the "father" of XP. A book every modern software engineer should read.

Empty JSP template

Here is an empty JSP template, providing basic settings for UTF-8 character encoding configuration.
<%@page contentType="text/html; charset=UTF-8" pageEncoding="UTF-8" %>
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN" "http://www.w3.org/TR/html4/strict.dtd">
<html>
   <head>
      <title>Your page title here!</title>
      <meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
   </head>
   <body>Your markup here!</body>
</html>

17 February 2012

Reinventing the wheel: Collection.size()

An original way to reinvent the wheel - from a "Programming Fundamentals" exam
public class MyContainer {
   private final Collection myBag;

   public int getBagSize() {
      int j = 0;
      for(int i = 0; i < this.myBag.size(); i++) {
         j++;
      }
      return j;
   }
}

15 February 2012

Improper inheritance and light-hearted downcast

And here is maybe the worst design I've ever seen - directly from production-released code, of course...

Inheritance is a key feature of object-oriented programming languages, providing support for both specialization and flexible extensibility. But... using inheritance just for the sake of it can lead to horrible designs, where the distinction between superclass/interfaces and the different subclasses is improper and extensibility is actually impossible. This is the case in some real-life production code I was lucky enough to see:

public interface InterfaceX {
   public abstract void aMethodX();
}

public interface InterfaceY {
   public abstract String aMethodY(final String s);
}

Class FatImplementation implements both InterfaceX and InterfaceY:

public class FatImplementation implements InterfaceX, InterfaceY {
   public void aMethodX() {
      ...
   }

   public String aMethodY(final String s) {
      return ...;
   }
}
And... Ladies and Gentlemen... the crazy client class, whose method casts an InterfaceY reference to InterfaceX, relying on the fact that FatImplementation implements both interfaces:

public class CrazyClass {
 public void theCrazyCastingMethod(final InterfaceY y) {
  final InterfaceX x = (InterfaceX) y;
  x.aMethodX();
 }
}

This is an improper use of the abstraction / inheritance programming model, since the cast succeeds only with one specific implementation (FatImplementation) of the given interface, and polymorphism is actually not possible.
... ... ... poorly and foolishly designed, isn't it? And... what if I told you that InterfaceX and InterfaceY, in the original code, were StatefulXXX and StatelessXXX? So... FatImplementation was stateful and stateless at the same time! AAAAAAHHHH!

13 February 2012

Minimal log4j.xml configuration file template

Here is a log4j.xml file template, providing a minimal log4j configuration that enables console output and the DEBUG logging level for a given base package - the logging level is set to WARN for the remaining packages.

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE log4j:configuration SYSTEM "log4j.dtd">
<log4j:configuration xmlns:log4j="http://jakarta.apache.org/log4j/">
 <appender name="CONSOLE" class="org.apache.log4j.ConsoleAppender">
  <param name="Target" value="System.out" />
  <param name="Threshold" value="DEBUG" />
  <layout class="org.apache.log4j.PatternLayout">
   <param name="ConversionPattern"
    value="%d{ABSOLUTE} %-5p [%c{1}] %m%n" />
  </layout>
 </appender>
 <!-- Insert your own base-package HERE! -->
 <logger name="org.amicofragile">
  <level value="DEBUG" />
 </logger>
 <root>
  <priority value="WARN" />
  <appender-ref ref="CONSOLE" />
 </root>
</log4j:configuration>
Official log4j.xml documentation 

11 February 2012

Singleton, testing and dependency inversion

Singleton: pattern or antipattern?
According to Wikipedia, Singleton is a creational pattern, used to implement the mathematical concept of a singleton by restricting the instantiation of a class to one object. So, it's a pattern!
But it's an antipattern, too, especially from the point of view of testing: it's a (simple) variant of the Service Locator testability antipattern.

As Jens Schauder states in his post Fixing the Singleton, there are two key characteristics of the (classic implementation of the) singleton:
  • There can be only a single instance of the class developed as a singleton
  • There is a central, global access point to the singleton instance
Although the first one is the main - if not the only - reason for using the Singleton pattern, it almost always comes with the second one. But... while the first one is a conceptual feature of the pattern, the second is nothing but an implementation detail!

We can therefore speak of conceptual Singleton, when we have a class that can be instantiated only once in the application lifecycle, and syntactic Singleton, with reference to the traditional GoF's implementation. 
Well, my idea is that you can think of two different basic implementation strategies for conceptual Singletons:
  • Singleton by syntax - the traditional GoF implementation, through a private static instance and a public static (and thus global) accessor
  • Singleton by contract / application - an implementation of the "single class instance" concept without syntactic constraints: the application code takes care of respecting the "single instance" contract. Typically, the application infrastructure responsible for creating objects and setting up collaborator references instantiates the Singleton class only once and passes the created instance to the modules interested in using it: this is substantially an application of the Dependency Inversion Principle, and can be implemented through Inversion of Control frameworks like Spring and Google Guice (for a good discussion about self-implemented dependency injection, see this article).
The first approach suffers from the problem suggested initially: there is global state, publicly accessible, liberally referenced everywhere in the client code - and global state is evil!
The second one, instead, provides a conceptual Singleton instance without resorting to syntactic constraints: the application lifecycle infrastructure ensures the unicity of the Singleton class instance.

In code:
  • Singleton by syntax:
    package singleton;

    public class UIDGenerator {
      private static final UIDGenerator INSTANCE = new UIDGenerator();

      public static UIDGenerator getInstance() {
        return INSTANCE;
      }

      private UIDGenerator() {
      }

      public String nextId() {
        return ...;
      }
    }

    Client code:

    public void foo() {
      String newId = UIDGenerator.getInstance().nextId();
      // Use newId
    }

    public void bar() {
      Account account = new Account(UIDGenerator.getInstance().nextId());
      // Use newly created Account
    }

    This is the classical GoF implementation of the Singleton pattern: the private constructor and the final static INSTANCE ensure instance unicity, while the public static accessor provides global access to the singleton instance.
  • Singleton by contract:
    package singleton;

    public interface UIDProvider {
      public abstract String nextUid();
    }

    Client code:

    package singleton;

    public class AccountManager {
      private final UIDProvider uidProvider;

      public AccountManager(UIDProvider uidProvider) {
        this.uidProvider = uidProvider;
      }

      public void bar() {
        Account account = new Account(uidProvider.nextUid());
        // Use newly created Account
      }
    }

In the second implementation we define an interface for UID generation: the application infrastructure (i.e., in most cases, an Inversion of Control container like Spring) will ensure that a single instance of a class implementing UIDProvider is passed wherever it's needed.
This way we obtain the conceptual part of the pattern without the syntactic one: there is no public static context accessed everywhere, and a reference to the singleton is instead injected into the modules that need it. So, unlike in the first case, it's possible to mock UIDProvider for testing purposes (for example because the real implementation is time-expensive, or there is a fee for every use, or simply because unit testing is isolation testing and we need to make assumptions about the generated UID in testing code):

public class AccountManagerTest {
  @Test
  public void testMethod() {
    AccountManager underTest = new AccountManager(new FixedUIDProvider());
    // Exercise underTest
  }
}
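
The FixedUIDProvider stub used above is not shown in the post; a minimal hand-rolled version (names and the default UID value are illustrative) might look like this:

```java
// The UIDProvider interface from the post, repeated so this sketch
// compiles on its own.
interface UIDProvider {
    String nextUid();
}

// Hypothetical test stub: always returns a known UID, so unit tests
// can assert against a predictable value.
class FixedUIDProvider implements UIDProvider {
    private final String fixedUid;

    FixedUIDProvider(String fixedUid) {
        this.fixedUid = fixedUid;
    }

    FixedUIDProvider() {
        this("TEST-UID-1");
    }

    @Override
    public String nextUid() {
        return fixedUid;
    }
}
```

This is the whole point of the contract-based approach: the stub needs no mocking library at all, just a second implementation of the interface.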

This is, IMHO, a far, far more powerful approach to implementing a singleton than the classic one: can you figure out how to mock UIDGenerator.getInstance().nextId() calls?
The basic idea behind this proposal is a variation on the Single Responsibility Principle: the classic singleton implementation leads to classes with two responsibilities: a functional responsibility - what the class does - and a structural responsibility - how the class is instantiated and how the instance is accessed. Inversion of Control containers, and more generally the idea of Dependency Inversion, support separation of responsibilities by dividing functional code from object graph lifecycle management code: this leads to a clearer, simpler design that decouples singleton implementations from client modules and supports testing in a more effective way.
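
No container is even required to get this separation: a hand-rolled "composition root" at the application entry point is enough. A minimal sketch (SequentialUIDProvider and the method names are made up for illustration):

```java
// The contract from the post.
interface UIDProvider {
    String nextUid();
}

// One possible real implementation: a simple sequential generator.
class SequentialUIDProvider implements UIDProvider {
    private long counter = 0;

    @Override
    public String nextUid() {
        return "UID-" + (++counter);
    }
}

// A client module: it only knows the interface, not who built it.
class AccountManager {
    private final UIDProvider uidProvider;

    AccountManager(UIDProvider uidProvider) {
        this.uidProvider = uidProvider;
    }

    String newAccountId() {
        return uidProvider.nextUid();
    }
}

public class Main {
    public static void main(String[] args) {
        // The only place where the provider is instantiated:
        // a single instance by contract, with no static state.
        UIDProvider provider = new SequentialUIDProvider();
        AccountManager accounts = new AccountManager(provider);
        System.out.println(accounts.newAccountId()); // UID-1
    }
}
```

An IoC container like Spring just automates what main() does by hand here: building the object graph once and handing each module its collaborators.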

09 February 2012

Book review: Implementation Patterns, by Kent Beck

Implementation Patterns is a book from 2007, written by Kent Beck - software consultant, one of the original signatories of the Agile Manifesto in 2001, and author of, among other things, JUnit, Extreme Programming Explained, and TDD By Example.
Implementation Patterns is basically a collection of coding-level patterns, whose purpose is to address values defined in the initial theory of programming: communication (readability), simplicity, and flexibility.
These patterns are described in Beck's usual clear and straightforward style, and are common-sense recipes that every professional should know and recognize. From this point of view, then, this is not an indispensable book: given a bit of common sense, its content is almost trivial, but... there are so many developers without common sense out there... that such a book, and books like it, like the ultra-famous Clean Code by Uncle Bob, are sadly necessary...
I think this is a book every developer should read: even if only to recognize behaviors they already adopt in everyday coding, and that the boss or the management consider merely academic...

Uh, and: very interesting, the chapter about applying implementation patterns to designing frameworks for evolution...

08 February 2012

Configuration architectural antipatterns

I've got the flu, these days: and in the fever's drowsiness, the memory of an application I worked on six years ago has surfaced.
It was an aged application, developed by a former employee using Visual Basic 6 - uh! oh!
I was only doing fixes and small enhancements on that application, but I'll remember it forever thanks to its bright and genial configuration-related design.

That application's configuration, as was the case for other applications by the same author too, was "designed" around the Multiple Configuration Points conceptual antipattern: it wasn't stored in a single, well-defined, easily accessible place, but in multiple, differently implemented repositories:
  • part of the configuration was read from an "ini" file: the file was processed without any parsing library, by reading the raw input and counting rows: at row #X, parameter Y - oh, yeah!
  • the remaining part of the configuration came from an MS Access file (may god forgive him! - if a god exists) and was accessed using OLE DB support and SQL queries (this part of the configuration was modifiable through the application itself)
So, in one application, and only with regard to configuration management, you could list several architectural antipatterns:
  1. Multiple Configuration Points: configuration spread across multiple, heterogeneous repositories
  2. Reinventing The Wheel: "ini" file read through a home-made procedure
  3. Brittle Configuration: formatting-dependent "ini" file semantics
  4. Tight Coupling to external library versions: the MDB configuration file could be read without errors only when the MS Access version on the host machine was the one the application had been compiled for - otherwise, the application wasn't even able to start. And if the MDB file was modified using a more recent version, it became unreadable for the application...
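
As a contrast to antipattern #2: the wheel already exists. In Java (the language used elsewhere on this blog), java.util.Properties reads a key/value format that is independent of line order and spacing - no row counting required. The file content and keys below are made up for illustration:

```java
import java.io.StringReader;
import java.util.Properties;

// Instead of a home-made parser that assumes "parameter Y is at row X",
// a standard key/value format is looked up by name, so reordering or
// reformatting the file cannot break the semantics.
public class ConfigExample {
    public static void main(String[] args) throws Exception {
        // Stands in for the content of a real configuration file.
        String raw = "db.host=localhost\ndb.port=5432\n";

        Properties config = new Properties();
        config.load(new StringReader(raw));

        System.out.println(config.getProperty("db.host")); // localhost
        System.out.println(config.getProperty("db.port")); // 5432
    }
}
```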
And - the final masterpiece: that application, which used a relational DB for (partial) configuration management, did not use any kind of database for business data storage or inter-application communication: data storage was performed by writing sequential text files with fixed-length records, and the applications interested in data interchange read those same files (the code reading and writing records was, naturally, duplicated between applications, not extracted into a common library).

I've got the flu, these days... but even conceiving such an architecture must have required a great fever... mustn't it?