Monthly Archives: November 2013

Semantic versioning is a simplification

Semantic versioning seems to be becoming a de facto standard in the industry. This is very good, since the versioning of software packages matters a lot when we want to compare two versions and decide which one to use. Without semantic versioning you have to consult the documentation to decide whether you can upgrade the library you use from version 1.34 to 1.36. When you rely on semantic versioning the answer is simple (in theory): yes, you can. Version 1.36 has to be backward compatible with version 1.34. The decision is as easy as saying it out loud, and the time saved can be spent on more precious work that needs a brain. In practice there may be some problems, but in that case the basic assumption, that semver was used, turns out to be false.

How can we tell that 1.36 is backward compatible with 1.34? That comes from the definition of semantic versioning. The M.m.p scheme, Major.minor.patch, says that major versions introduce incompatible changes, minor versions introduce compatible changes, and patch is what the name says: a bug fix.
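This makes the upgrade decision mechanical. The following is a minimal sketch of that decision (the class and method names are made up for illustration, and it assumes well-formed version strings without pre-release tags):

public class SemverCheck {

    // the candidate version is backward compatible with the current one
    // if the major version is the same and the minor version is not smaller
    public static boolean isBackwardCompatible(String current, String candidate) {
        int[] cur = parse(current);
        int[] cand = parse(candidate);
        return cand[0] == cur[0] && cand[1] >= cur[1];
    }

    // extract the major and minor numbers; the patch version
    // does not matter for the compatibility decision
    private static int[] parse(String version) {
        String[] parts = version.split("\\.");
        return new int[]{Integer.parseInt(parts[0]), Integer.parseInt(parts[1])};
    }

    public static void main(String[] args) {
        System.out.println(isBackwardCompatible("1.34.0", "1.36.0")); // true
        System.out.println(isBackwardCompatible("1.34.0", "2.0.0"));  // false
    }
}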

You can notice that M and m talk about the specification of the library, while p talks about the implementation. There is nothing new about this. The Java SE documentation contains a chapter about product versioning. The chapter talks about how to version packages and recommends using separate specification and implementation versions. The versions are defined in the manifest file of the JAR file that contains the package. The specification version is recommended to have the form major.minor.micro. There is no recommendation for the implementation version. A sample manifest file looks like this:

Manifest-version: 1.0

Name: java/util/
Specification-Title: "Java Utility Classes"
Specification-Version: "1.2"
Specification-Vendor: "Sun Microsystems Inc."
Package-Title: "java.util"
Package-Version: "build57"
Package-Vendor: "Sun Microsystems. Inc."

Semantic versioning as an approach is compatible with this. Semver does not support micro versions for the specification, only major and minor, but after all the Oracle/Sun documentation itself does not say much about the micro level: major version numbers identify significant functional changes, minor version numbers identify smaller extensions to the functionality, and micro versions are even finer grained. Even finer grained. That is all it says. Do we need that? My suggestion is: no. And this is the implicit suggestion of semver.

Einstein allegedly said: “Everything should be made as simple as possible, but not simpler.”

This is the case with semantic versioning and Java package versioning. As time has proved, Java package versioning is simply not simple: it is too complex to be practical, and real-life versioning problems do not need this level of complexity. Other versioning approaches may be too simple, and at the end of the day semver may just fit the purpose.

Maven is a huge supporter of semantic versioning, and the archiver plugin helps you include the versions in the manifest file, so that your package is compatible with the Java SE recommendation and the Java runtime can query the version of a package. The version of your library is specified in the pom.xml file, and this version is used when creating the manifest:

Implementation-Title: ${project.name}
Implementation-Version: ${project.version}
Implementation-Vendor-Id: ${project.groupId}
Implementation-Vendor: ${project.organization.name}

Specification-Title: ${project.name}
Specification-Version: ${project.version}
Specification-Vendor: ${project.organization.name}

It is interesting that both the specification and the implementation version contain the whole package version. The semantic-versioning-compatible solution would be to include only the M.m part in the specification version, and either the whole version number or only the remaining part in the implementation version.
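The standard way to read these values back at run time is the java.lang.Package API. A minimal sketch; the values come from the manifest of the JAR the class was loaded from, and may be null when no manifest information is present:

public class VersionQuery {
    public static void main(String[] args) {
        Package p = java.util.ArrayList.class.getPackage();
        // either value may be null when the class was not loaded
        // from a JAR carrying the corresponding manifest entries
        System.out.println("specification: " + p.getSpecificationVersion());
        System.out.println("implementation: " + p.getImplementationVersion());
    }
}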

Cheap programming languages

Twitter: @chesterbr 2012.05.23. 17:24
Choosing a language because it has cheaper developers is like building your house with Lego so you can hire anyone as a construction worker.

This is an old tweet that recently caught my eye via a repost. I read it and I was nodding: very true. But on second thought I started to think about the practice that I see when large companies choose technology.

Recently I was part of a decision to go for JavaScript and some native JS framework on the client side instead of GWT. I was busy warning the management that the huge pool of JS developers available at a moderate price is a guarantee for failure. You can buy as many JS developers as you want for a low price, but they are the ones producing low quality. Low price, low quality. You have to hire the JavaScript developers who have higher price tags. (Not for the price tag alone, of course.)

If you want acceptable quality you have to pay for it. In the case of JS, because the language gives you more freedom, the price may even be higher: partially in the hourly rate of the experts who are really good, and the total number of hours may be higher as well. If you do not take that seriously you will face the sad truth: the more freedom you have, the more trouble you can get into, unless you have self-discipline. And self-discipline comes with a price tag. Pay peanuts and you get only monkeys working for you.

And then I went on thinking. What technologies did the construction workers use when my house was renovated ten years ago? Was it state of the art? No, it was not. Why? Because these people were just not able to handle modern technology. The electrician (one of the cleverer guys, still illiterate, and I mean literally: he could not read or write) fixed tubes on the wall for the wires to run in, and boxes for the outlets. Then the bricklayers came and put a cover of mortar over them. The electrician had to find and dig out the outlets. The bricklayers just did not care about the work of the other person. But the technology was prepared for that. This is Lego. When you cannot adjust the quality of the people, you adjust the technology. We build your house from Lego. Not the toy type, but built on the same principles. The toy Lego is adjusted to the brain capabilities of a child; the house-construction Lego is adjusted to the construction workers. Not a big difference.

The same is true for software. Many years ago an IT person at a German bank was emailing me questions and explained that they were considering replacing a few hundred Perl scripts with their ScriptBasic equivalents. After that, the maintenance could simply be cheaper. Perl programmers were scarce at that time, and pricey. ScriptBasic, on the other hand, is just BASIC. Everybody knows BASIC, just as well as the little girl from Jurassic Park knows Unix. (This is an extreme example, though.)

JavaScript is the dominant language on the client side and is biting into the server side as well. PHP is widespread in the web arena. Why? Because they are the Lego types compared to languages like Java, Python, Ruby, Scala, or Haskell.

Look again at the quote we started the article with: choosing a language because it has cheaper developers is like building your house with Lego so you can hire anyone as a construction worker. Looking at it from a different angle, and without exaggeration: yes, it is exactly that. We choose a cheaper language so we can hire cheaper developers (still not just anyone) as construction workers. Very true. And that is really what IT managers do. Even though they know they will have something built of Lego.

Just with a special spice added to it: quality control.

Creating Immutable Objects at Run Time

Java supports immutable variables, in the form of the final modifier for fields and local variables, but it does not support the immutability of objects at the language level. There are design patterns that aim to distinguish mutator and query methods in objects, but the standard library and libraries from different sources may not support the feature.

Using immutable objects makes the code safer, because they reveal programming mistakes that manifest at run time sooner. This is the so-called “fail-fast” principle, which you can certainly understand and appreciate if you came to Java from the C or C++ field. If you have an immutable version of an object and you pass it on to a library (be it external or your own), an exception occurs as soon as the code tries to call any method that is a mutator. Without an immutable version, the error such a call causes manifests much later, when the program fails with the modified, and thus presumably inconsistent, object.
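The JDK collections show the same fail-fast behavior. A minimal demonstration:

import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class FailFastDemo {
    public static void main(String[] args) {
        List<String> mutable = new ArrayList<>();
        mutable.add("one");
        List<String> readOnly = Collections.unmodifiableList(mutable);
        System.out.println(readOnly.get(0)); // query: works fine
        readOnly.add("two");                 // mutator: throws
                                             // UnsupportedOperationException
                                             // right here, at the mistake
    }
}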

Because of these advantages there are libraries that deliver immutability for some special cases. The best-known and most widely used example is the Guava immutable collection library from Google, which creates immutable versions of collections. However, collections are not the whole world of Java classes.

When you have the code under your own control you can split your interfaces into a query and a mutator part, the mutator eventually extending the query interface. The implementation can also be done in two classes: a query class implementing the query interface, and a mutator class extending the query class and implementing the mutator interface (which also includes the query interface functions). When you want an immutable version of an object you cast it and pass it on using the query interface. This is, however, not 100% secure. The library can, by sheer ignorance or by mistake, cast the object back and mutate the object's state. The fool-proof solution is to implement the query interface in a class that is set up with a reference to the mutable object and delegates all methods defined in the query interface. Though it is cumbersome to maintain such code in Java in the case of numerous and huge classes, the solution is generally simple and straightforward, as the sketch below shows. You can even generate the delegating query implementation (extending the mutable class) when the query/mutator interfaces and class implementations are not separated.
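A sketch of that structure (the Person names are made up for illustration):

// the query part: read-only access
interface PersonQuery {
    String getName();
}

// the mutator part extends the query part
interface PersonMutator extends PersonQuery {
    void setName(String name);
}

class Person implements PersonMutator {
    private String name;
    public String getName() { return name; }
    public void setName(String name) { this.name = name; }
}

// the fool-proof version: delegates the queries, and cannot be cast
// back to anything that mutates the underlying object
class ImmutablePerson implements PersonQuery {
    private final PersonQuery person;
    ImmutablePerson(PersonQuery person) { this.person = person; }
    public String getName() { return person.getName(); }
}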

The project Immutator delivers this functionality at run time. Using the library you can create a delegating proxy class at run time that extends the mutator class and passes the method calls on to the original object when the method is considered a query, but throws a runtime exception when the method is considered a mutator. The use of the class is very simple; all you have to do is call a static method of the Immutable class:

MyMutatorClass proxy = Immutable.of(mutableObject);

The generated proxy will belong to a class that extends the original class of mutableObject; therefore you can pass proxy along to any code where you would pass mutableObject but do not want the code to alter the state of the object.

How does the library know which methods are queries and which are mutators? In this simple case (there are more complex calls if the simple case is not sufficient) Immutator assumes that any method that is void is a mutator, and any method that returns some value is a query method.
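For example, assuming a simple class like the following (the class is made up for illustration; Immutable is the library class shown above):

public class Counter {
    private int count;

    public void setCount(int count) { // void return type: treated as a mutator
        this.count = count;
    }

    public int getCount() {           // returns a value: treated as a query
        return count;
    }

    public static void main(String[] args) {
        Counter proxy = Immutable.of(new Counter());
        System.out.println(proxy.getCount()); // delegated to the original object
        proxy.setCount(42);                   // throws a runtime exception
    }
}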

To support the ever-increasing popularity of fluent APIs, the call can be written in the form:

MyMutatorClass proxy = Immutable.of.fluent(mutableObject);

in which case any method that returns a value compatible with the class of the argument is also considered to be a mutator method.
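A typical class that needs the fluent form could look like this (again, the class is made up for illustration):

public class Person {
    private String name;

    // returns the object itself: under the fluent form this is
    // considered a mutator even though it is not void
    public Person withName(String name) {
        this.name = name;
        return this;
    }

    public String getName() { // still a query
        return name;
    }
}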

If even this does not describe the behavior of the class to be proxied, then the general form of the call is:

MyMutatorClass proxy = Immutable.of.using(Query.class).of(mutableObject);

which assumes that any method defined in the interface Query is a query, and the methods not present in the interface Query are mutators. Using this form, a query proxy can be created for any object.
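In this form the interface simply enumerates the methods to be trusted as queries; a hypothetical example:

// only the methods listed here are treated as queries;
// every other method of the proxied class counts as a mutator
public interface Query {
    String getName();
    int getCount();
}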

This is nice and interesting. Having said all that, there are some limitations in the implementation of the library that come partially from the Java language and partially from the available JDK.

You cannot declare any final method as a mutator method. The reason is that the generated proxy class has to extend the original class so that the proxy object can be used in place of the original object. It cannot, however, override the final methods. Final methods are actually not proxied; execution is passed directly to the original method. This is how Java works.

The proxy class is created as Java source and compiled at run time. This may be slower than, for example, cglib, which uses the asm package and generates byte-code directly. On the other hand, the library may be more resilient to Java version changes, and it is easier to have a look at the internal workings of the library and of the proxy.

Last but not least, the library uses some unsafe package calls (google that if you need to), which may not work on all platforms. This is needed to create the instance of the proxy object. Since the proxy class is an extension of the original class, creating a proxy object the “normal way” would implicitly invoke the constructor of the extended class. This may not be a problem, but in some cases, when the constructor does some heavy-duty work, it may be.
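The kind of call involved is sun.misc.Unsafe.allocateInstance(), which creates an object without invoking any constructor. A sketch of the technique (this is not the library's actual code, just an illustration of the mechanism):

import java.lang.reflect.Field;
import sun.misc.Unsafe;

public class ConstructorlessAllocation {
    public static Object allocate(Class<?> proxyClass) throws Exception {
        // the Unsafe instance is not publicly accessible;
        // it is usually obtained through reflection
        Field field = Unsafe.class.getDeclaredField("theUnsafe");
        field.setAccessible(true);
        Unsafe unsafe = (Unsafe) field.get(null);
        // no constructor of proxyClass or its superclasses runs here
        return unsafe.allocateInstance(proxyClass);
    }
}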

Knowing all this, incorporating the library into your application is very simple. Since the com.javax0 libraries are stored in the Sonatype repository, all you have to do is insert the library as a dependency into your pom.xml file:

<dependency>
    <groupId>com.javax0</groupId>
    <artifactId>immutator</artifactId>
    <version>1.0.0</version>
</dependency>

and stay tuned for upcoming releases.

Defining constants in an interface: the good pattern

In a previous post I analyzed the constant interface pattern a bit, and I came to the conclusion that there is nothing horrible about defining constants in an interface, so long as no class implements the interface.

The problem is that somebody may implement the interface. The reason for doing that may be sheer ignorance or just a simple mistake.

The first and easiest solution is to use the @interface keyword instead of interface. This defines an annotation interface, which can still be implemented by a class, but doing so requires the definition of the method

@Override
public Class<? extends Annotation> annotationType() {
    return null;
}
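A minimal sketch of such a constant-holding annotation interface (the names are made up for illustration):

public @interface Constants {
    // constant declarations are allowed in an annotation interface and
    // are implicitly public static final, just as in a regular interface
    int MAX_RETRY = 3;
    String DEFAULT_NAME = "anonymous";
}

Client code refers to Constants.MAX_RETRY just as with an ordinary constant interface.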

If this does not stop somebody from implementing the interface “accidentally”, then nothing will.

Also, since it is not usual practice to implement such an interface, Eclipse will not offer the interface for completion after the implements keyword.

My personal taste, however, is not compatible with this approach. It is technically possible, but I consider it to be more of an entry in some weird obfuscated code contest than production code.

The solution I prefer over the previous one is a nested structure. The outer element is a class that has an interface and a class inside. The interface is private, thus you cannot implement it outside, even if you are ignorant. This interface defines the constants. Since the interface is private, the constants cannot be accessed from outside directly, but we will overcome this obstacle. Along with the interface there is a final and public member class. This class implements the interface (sorry, purists) and contains a private default constructor. But that is all it does. The template code for this looks something like:

public class ConstantClass {
  // private: no class outside of ConstantClass can implement it
  private interface ConstantInterface {
    int a = 13; // interface constants are implicitly public static final
  }

  // the only class that can see, and thus implement, the interface
  public final class Constants implements ConstantInterface {
      private Constants(){}
  }
}

Since the class Constants is final it cannot be extended, and since this is the only class to which ConstantInterface is visible, there can be no other classes that implement the interface. This ensures that step number 2 of the constant interface pattern (implementing the interface in a class) cannot accidentally be done, and thus the interface cannot leak into the definition of any class except the one Constants class.

The use of the constants is the same as if they were defined directly in a utility class.
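For example, with the template above:

public class Client {
    public static void main(String[] args) {
        // the constant is reached through the public member class
        System.out.println(ConstantClass.Constants.a); // prints 13
    }
}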

If you compare this pattern to the one using a utility class to define constants, you see that this pattern does not require you to write the keywords public static final in front of each constant (constants in an interface are implicitly public, static and final). This makes it a bit less error prone, since you cannot forget the keywords. On the other hand, this pattern is more complex.

There is also an advantage of this pattern that shines in the unlikely case when there are many constants you want to structure into different groups. You can have many private member interfaces, each defining one set of constants, and then many member classes that implement one or more of the interfaces, as the sketch below shows. This gives you a very structured way to define your constants. If you have many. Which, to be honest, is not likely to happen. If you only have a few constants, simply go on with the good old utility class solution. Using a pattern should make your code simpler, not more complex.
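For reference, such a grouping could look like this (all names are made up for illustration):

public class NetworkConstants {
    private interface Ports {
        int HTTP = 80;
        int HTTPS = 443;
    }

    private interface Timeouts {
        int CONNECT_MS = 5_000;
        int READ_MS = 30_000;
    }

    // one public member class per sensible combination of the groups,
    // used as NetworkConstants.Web.HTTP and so on
    public final class Web implements Ports, Timeouts {
        private Web() {}
    }
}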