Java Deep

Pure Java, what else

Random Ideas about Code Style

Some of the sentences of this article are ironic. Others are to be taken seriously. It is up to the reader to separate them. Start with these sentences.

How long should a method be in Java?

This is a question I ask many times during interviews. There is no one best answer. There are different programming styles, and different styles are just different; many of them can be ok. I absolutely accept somebody saying that a method should be as short as possible, but I can also accept 20- to 30-line methods. Above 30 lines I would be a bit reluctant.

“When I wrote this code only God and I understood it. Now only God does.”
Quote from an unknown programmer, last quarter of the 20th century.

The most important thing is that the code is readable. When you write the code you understand it. At least you think you understand what you wanted to code. What you actually coded may be a different story. And here comes the importance of readability as opposed to writability.

When you refactor code containing some long method and split the method up into many small methods, you actually create a tree structure from linear code. Instead of having one line after the other you create small methods and move the actual commands into those. After that the small methods are invoked from a higher level. Why does this make the code more readable?

First of all, because each method has a name. That is what methods have, and in Java we love camel-cased, talking names.

private void pureFactoryServiceImplementationIncomnigDtoInvoker(IncomingDto incomingDto){
  incomingDto.invoke();
}

But why is it any better than inlining the code and using comments?

// pure factory service implementation incoming dto invoker
incomingDto.invoke();

Probably that is because you have to type pureFactoryServiceImplementationIncomnigDtoInvoker twice? I know you will not type it twice. You will copy-paste it or use some IDE auto-complete feature, and for that reason the typo replacing ‘Incoming’ with ‘Incomnig’ does not really matter.

When you split up the code into small methods the names are a form of comment.

Very much like what we do in unit tests using JUnit 4.0 or later. Old versions required the test methods to start with the literal test... but that was not a good idea, as was discovered a long time ago. (I just wonder when Go will get there.) These days Groovy (and especially Spock) lets us use whole sentences with spaces and newlines as method names in unit tests. But those method names fortunately need not be typed twice. They are listed and invoked by JUnit via reflection and thus they really are what they really are: documentation.
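
For example, a test name can carry the whole specification of the behavior it checks (a small sketch, assuming the usual org.junit imports and a hypothetical sum() utility under test):

@Test
public void returnsZeroWhenTheInputListIsEmpty() {
    // the name is never typed anywhere else: JUnit discovers and reports
    // it via reflection, so it works purely as documentation
    Assert.assertEquals(0, sum(Collections.emptyList()));
}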

So then the question still is: why is a tree structure better than a linear one?

Probably that is how our brains work. We look at a method and see that there are two or three method calls in it. There can be a simple branch or loop structure in it, perhaps one nested into the other, but not much deeper than that. It is simple, and if the method names are selected well (I mean really in a good, meaningful and talking way), they are easy to understand, easy to read.

Then we can, using the navigational aid of the IDE, go to the methods and concentrate on the limited context of the method we are looking at. There is a rough rule:

You should be able to understand what a method does in 15 seconds.

If you stare at the method longer and still have no idea what the method does, it means it is too complex. Some people are better at apprehending the structure of the code, others are challenged in that. I am in the latter group, so when I review code I often prefer smaller and simpler methods. I refuse the code to be merged, or I refactor it myself, depending on the role and the actual task I perform. Juniors I work with think that I am strict and picky. The truth is I am slow. The complexity of the code should be compatible with the weakest link of the chain: any one of the team (including imaginable future maintainers over the coming 20 years, till the code is finally deleted from production) should understand and maintain the code easily.

Many times, looking at the git history, I see refactoring ping-pong. For example the method

Result getFrom(SomeInput someInput){
  Result result = null;
  if( someInput != null ){
    result = someInput.get();
  }
  return result;
}

is refactored to

Result getFrom(SomeInput someInput){
  final Result result;
  if( someInput == null ){
    result = null;
  }else{
    result = someInput.get();
  }
  return result;
}

and later the other way around.

One is shorter, while the other one is more declarative. Is the repetitive refactoring back and forth a problem? Most probably it is, but not for sure. If it happens only a few times and by different people then this is not something to worry about too much. When the code gets refactored the developer feels more attached to the code. A stronger “it is my code” feeling, which is important. Even though a good developer is not afraid to touch and modify any code. (What could happen? Tests fail? So what? Test? What test?) Note that not all developers are good developers. But what is a good developer after all? It is relative. There are better developers and there are not so good ones. If you see only good developers who are better than you, then probably you are lucky. Or not.

Implementing an annotation interface


Using annotations is an everyday task for a Java developer. If nothing else, the simple @Override annotation should ring a bell. Creating annotations is a bit more complex. Using the “home made” annotations during run-time via reflection, or creating a compile-time invoked annotation processor, is again one more level of complexity. But we rarely “implement” an annotation interface. Somebody, secretly, behind the scenes certainly does that for us.

When we have an annotation

@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.TYPE)
public @interface AnnoWithDefMethod {
    String value() default "default value string";
}

then a class annotated with this annotation

@AnnoWithDefMethod("my default value")
public class AnnotatedClass {
}

and finally when we get the annotation during runtime executing

AnnoWithDefMethod awdm = AnnotatedClass.class.getAnnotation(AnnoWithDefMethod.class);

then what do we get into the variable awdm? It is an object. Objects are instances of classes, not interfaces. Which means that somebody under the hood of the Java runtime has “implemented” the annotation interface. We can even print out features of the object:

        System.out.println(awdm.value());
        System.out.println(Integer.toHexString(System.identityHashCode(awdm)));
        System.out.println(awdm.getClass());
        System.out.println(awdm.annotationType());
        for (Method m : awdm.getClass().getDeclaredMethods()) {
            System.out.println(m.getName());
        }

to get a result something like

my default value
60e53b93
class com.sun.proxy.$Proxy1
interface AnnoWithDefMethod
value
equals
toString
hashCode
annotationType

So we do not need to implement an annotation interface, but we can if we want to. But why would we want that? So far I have met one situation where that was the solution: configuring Guice dependency injection.

Guice is the DI container of Google. The configuration of the binding is given as Java code in a declarative manner, as described on the documentation page. You can bind a type to an implementation simply declaring

bind(TransactionLog.class).to(DatabaseTransactionLog.class);

so that all TransactionLog instances injected will be of type DatabaseTransactionLog. If you want different implementations injected into different fields in your code, you have to signal it to Guice some way, for example creating an annotation, putting the annotation on the field or on the constructor argument, and declaring

bind(CreditCardProcessor.class)
        .annotatedWith(PayPal.class)
        .to(PayPalCreditCardProcessor.class);

This requires PayPal to be an annotation interface, and you are required to write a new annotation interface accompanying each CreditCardProcessor implementation, or even more of them, so that you can signal and separate the implementation types in the binding configuration. This may be overkill, just having too many annotation classes.

Instead of that you can also use names. You can annotate the injection target with the annotation @Named("CheckoutPorcessing") and configure the binding

bind(CreditCardProcessor.class)
        .annotatedWith(Names.named("CheckoutProcessing"))
        .to(CheckoutCreditCardProcessor.class);

This is a technique that is well known and widely used in DI containers. You specify the type (interface), you create the implementations and finally you define the binding using names. There is no problem with this, except that it is hard to notice when you type porcessing instead of processing. Such a mistake remains hidden until the binding fails at run-time. You could use a final static String constant to hold the actual value at both places, but it is still duplication, and the check connecting the two places remains a string comparison.

The idea is to use something else instead of a String. Something that is checked by the compiler. The obvious choice is to use a class. To implement that, the code can be created learning from the code of NamedImpl, which is a class implementing the annotation interface. The code is something like this (note: Klass is the annotation interface; a possible version of it is shown after this listing):

import java.lang.annotation.Annotation;

class KlassImpl implements Klass {

    private final Class value;

    private KlassImpl(Class value) {
        this.value = value;
    }

    static Klass klass(Class value) {
        return new KlassImpl(value);
    }

    @Override
    public Class<? extends Annotation> annotationType() {
        return Klass.class;
    }

    @Override
    public Class value() {
        return value;
    }

    @Override
    public boolean equals(Object o) {
        if (!(o instanceof Klass)) {
            return false;
        }
        Klass other = (Klass) o;
        return this.value.equals(other.value());
    }

    @Override
    public int hashCode() {
        // the algorithm prescribed by the java.lang.annotation documentation
        return 127 * "value".hashCode() ^ value.hashCode();
    }
}
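
The annotation interface Klass itself is not listed in the original code. A minimal version could look something like this (a sketch; Guice requires binding annotations to be marked with @BindingAnnotation):

import com.google.inject.BindingAnnotation;
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

@BindingAnnotation
@Retention(RetentionPolicy.RUNTIME)
@Target({ElementType.FIELD, ElementType.PARAMETER, ElementType.METHOD})
public @interface Klass {
    Class value();
}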

The actual binding will look something like

  @Inject
  public RealBillingService(@Klass(CheckoutProcessing.class) CreditCardProcessor processor,
      TransactionLog transactionLog) {
    ...
  }
 
    bind(CreditCardProcessor.class)
        .annotatedWith(Klass.klass(CheckoutProcessing.class))
        .to(CheckoutCreditCardProcessor.class);

In this case any typo is likely to be discovered by the compiler. What actually happens behind the scenes, and why were we required to implement the annotation interface?

When the binding is configured we provide an object. Calling Klass.klass(CheckoutProcessing.class) will create an instance of KlassImpl, and when Guice tries to decide if the actual binding configuration is valid to bind CheckoutCreditCardProcessor to the CreditCardProcessor argument in the constructor of RealBillingService, it simply calls the method equals() on the annotation object. If the instance created by the Java runtime (remember that the Java runtime creates an instance of a class with a name like com.sun.proxy.$Proxy1) and the instance we provided are equal, then the binding configuration is used; otherwise some other binding has to match.

There is another catch. It is not enough to implement equals(). You may (and if you are a Java programmer (and you are, why else would you read this article (you are certainly not a Lisp programmer)) you also should) remember that if you override equals() you also have to override hashCode(). And you actually have to provide an implementation that does the same calculation as the class created by the Java runtime. The reason for this is that the comparison may not be performed directly by the application. It may (and it does) happen that Guice looks up the annotation objects from a Map. In that case the hash code is used to identify the bucket in which the compared object has to be, and the method equals() is used afterwards to check equality. If the method hashCode() returns different numbers for the object created by the Java runtime and for ours, they will not even match up: equals() would return true, but it is never invoked for them because the object is not found in the map.

The actual algorithm for the method hashCode() is described in the documentation of the interface java.lang.annotation.Annotation. I had seen this documentation before, but I only understood why the algorithm is defined when I first used Guice and implemented a similar annotation-interface-implementing class.
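
Using the AnnoWithDefMethod example from the beginning of this article, the algorithm can even be verified by hand (a small sketch; the annotation has a single member named value holding the String "my default value"):

AnnoWithDefMethod real = AnnotatedClass.class.getAnnotation(AnnoWithDefMethod.class);
// per java.lang.annotation.Annotation, the hash code of an annotation is the sum,
// over its members, of (127 * memberName.hashCode()) ^ memberValue.hashCode()
int expected = 127 * "value".hashCode() ^ "my default value".hashCode();
System.out.println(real.hashCode() == expected); // prints true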

The final thing is that the class also has to implement annotationType(). Why? If I ever figure that out I will write about it.

Java compile in Java

In a previous post I wrote about how to generate a proxy during run-time, and we got as far as having Java source code generated. However, to use the class it has to be compiled and the generated byte code loaded into memory. That is “compile” time. Luckily, since Java 1.6 we have had access to the Java compiler during run time, and we can thus mix compile time into run time. Though that may lead to a plethora of awful things, generally resulting in unmaintainable self-modifying code, in this very special case it may be useful: we can compile our run-time generated proxy.

Java compiler API

The Java compiler reads source files and generates class files. (Assembling them into JAR, WAR, EAR and other packages is the responsibility of a different tool.) The source files and class files do not necessarily need to be real operating system files residing on a magnetic disk, SSD or memory drive. After all, Java is usually good about abstraction when it comes to the run-time API, and this is the case now. These files are “abstract” files you have to provide access to via an API; they can be disk files, but at the same time they can be almost anything else. It would generally be a waste of resources to save the source code to disk just to let the compiler, running in the same process, read it back, and to do the same with the class files when they are ready.

The Java compiler as an API available in the run-time requires that you provide some simple API (or SPI, if you like the term) to access the source code and also to receive the generated byte code. In case we have the code in memory, we can have the following code (from this file):

public Class<?> compile(String sourceCode, String canonicalClassName)
        throws Exception {
    JavaCompiler compiler = ToolProvider.getSystemJavaCompiler();
    List<JavaSourceFromString> sources = new LinkedList<>();
    String className = calculateSimpleClassName(canonicalClassName);
    sources.add(new JavaSourceFromString(className, sourceCode));

    StringWriter sw = new StringWriter();
    MemoryJavaFileManager fm = new MemoryJavaFileManager(
            compiler.getStandardFileManager(null, null, null));
    JavaCompiler.CompilationTask task = compiler.getTask(sw, fm, null,
            null, null, sources);

    Boolean compilationWasSuccessful = task.call();
    if (compilationWasSuccessful) {
        ByteClassLoader byteClassLoader = new ByteClassLoader(new URL[0],
                classLoader, classesByteArraysMap(fm));

        Class<?> klass = byteClassLoader.loadClass(canonicalClassName);
        byteClassLoader.close();
        return klass;
    } else {
        compilerErrorOutput = sw.toString();
        return null;
    }
}

This code is part of the opensource project Java Source Code Compiler (jscc) and it is in the file Compiler.java.

The compiler instance is available through the ToolProvider, and to create a compilation task we have to invoke getTask(). The code writes the errors into a string via a string writer. The file manager (fm) is implemented in the same package, and it simply stores the files as byte arrays in a map, where the keys are the “file names”. This is where the class loader will get the bytes later when the class(es) are loaded. The code does not provide any diagnostic listener (see the documentation of the Java compiler in the RT), compiler options or classes to be processed by annotation processors. These are all nulls. The last argument is the list of source codes to compile. We compile only one single class in this tool, but since the compiler API is general and expects an iterable of sources, we provide a list. Since there is another level of abstraction, this list contains JavaSourceFromStrings.
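
The class JavaSourceFromString is essentially the one shown in the JavaCompiler javadoc example: a SimpleJavaFileObject that serves the source from a String instead of a file (a sketch; the actual class in jscc may differ in details):

import javax.tools.SimpleJavaFileObject;
import java.net.URI;

class JavaSourceFromString extends SimpleJavaFileObject {
    private final String code;

    JavaSourceFromString(String name, String code) {
        // a synthetic "string:///" URI stands in for the file name
        super(URI.create("string:///" + name.replace('.', '/') + Kind.SOURCE.extension),
                Kind.SOURCE);
        this.code = code;
    }

    @Override
    public CharSequence getCharContent(boolean ignoreEncodingErrors) {
        return code;
    }
}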

To start the compilation the created task has to be “call”ed, and if the compilation was successful the class is loaded from the generated byte array or arrays. Note that in case there is a nested or inner class inside the top level class we compile, the compiler will create several classes. This is the reason we have to maintain a whole map for the classes and not a single byte array, even though we compile only one source class. If the compilation was not successful, then the error output is stored in a field and can be queried.
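
The ByteClassLoader does the mirror image of this on the loading side. A minimal sketch of what such a class loader can look like (the real one in jscc may differ):

import java.net.URL;
import java.net.URLClassLoader;
import java.util.Map;

class ByteClassLoader extends URLClassLoader {
    private final Map<String, byte[]> classes;

    ByteClassLoader(URL[] urls, ClassLoader parent, Map<String, byte[]> classes) {
        super(urls, parent);
        this.classes = classes;
    }

    @Override
    protected Class<?> findClass(String name) throws ClassNotFoundException {
        // serve the freshly compiled classes from the in-memory map if present
        byte[] byteCode = classes.get(name);
        if (byteCode != null) {
            return defineClass(name, byteCode, 0, byteCode.length);
        }
        return super.findClass(name);
    }
}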

The use of the class is very simple and you can find samples in the unit tests:

private String loadJavaSource(String name) throws IOException {
    InputStream is = this.getClass().getResourceAsStream(name);
    byte[] buf = new byte[3000];
    int len = is.read(buf);
    is.close();
    return new String(buf, 0, len, "utf-8");
}
...
@Test
public void given_PerfectSourceCodeWithSubClasses_when_CallingCompiler_then_ProperClassIsReturned()
        throws Exception {
    final String source = loadJavaSource("Test3.java");
    Compiler compiler = new Compiler();
    Class<?> newClass = compiler.compile(source, "com.javax0.jscc.Test3");
    Object object = newClass.newInstance();
    Method f = newClass.getMethod("method");
    int i = (int) f.invoke(object, null);
    Assert.assertEquals(1, i);
}

Note that the classes you create this way are only available to your code during run-time. You can create immutable versions of your objects, for example. If you want to have classes that are available during compile time, you should use an annotation processor like scriapt.

Optimize the client for the server’s sake

The Story

Once upon a time there was an application that was running on some server, and the client functionality was implemented in HTML/CSS and JavaScript. The application was serving trillions (not literally) of users, all hanging on the end of some phone lines talking to customers who were usually impatient and needed fast resolution of their problems. A typical call center application where speed is key.

Users were dissatisfied by the speed of the service.

No surprise. They usually are.

The application was delivering static resources for the client and JSON-encoded data via a REST interface. The underlying data structure was stored in a relational database managed from Java using JOOQ. All good technologies were applied to make the service as fast as possible; still, the performance was not accepted by the users. Users claimed that the system was slow, unusable, annoying, dead as a fish frozen in a lake (yes, that was actually one of the expressions we got in the ticketing system). We were aware that “unusable” was some exaggeration: after all, there were thousands of queries running through the system daily. But “slow” and “annoying” are not measurable terms, not to mention “dead fish”. First things first: we had to measure!

Measure

To address the issue we injected some JavaScript that measured the actual performance and reported the client-measured response times to a separate server via some very simple and very fast REST service. We paid attention not to put extra load on the original servers, not to make the situation even worse. The results showed that some of the responses arrived at the client within 1 sec, most of them in 2 sec, but there was actually a significant tail of the Poisson distribution with some responses as long as 15 sec. We also had the measurement on the server side, and the results were similar. On the server side we measured approximately 10% more transactions, ones that were lost to the measurement on the client, and the Poisson tail on the server contained responses up to 90 sec. We did not pay attention to these differences until a bit later.

Meeting the requirements may not be enough.

The actual measurements showed that the response times were in line with the requirements, so we created a report showing all good and shiny, hoping that this would settle the story. We presented the results to the management and we almost got fired. They were not interested in measurements and response time milliseconds. All they cared about was user satisfaction. (Btw: at this point I understood why the name is “user acceptance test” and not “customer acceptance test”.) We were blatantly directed not to mess with some useless measurements but to go and stand by some of the users and experience with our own eyes how slow the system was. It was a kind of shock. Standing by a user and “feeling” the system speed was not considered to be an engineering approach. But having nothing else in hand, we did. And it worked!

Assess

We could see that some of the users were impatient. They clicked on a button and, after a second when nothing happened, they clicked on it again. It meant that the browser was sending a request to the server, but before the response arrived the communication was cancelled on the client side and the request was sent again. Processing started from zero at the second button press, but the wait time for the user accumulated.

Fix

To help the patience of the users, we introduced an hourglass effect on the JavaScript level that signalled to the users that they had pressed a button and that the button press was being handled by the application. The hourglass was also moving, “entertaining” the users, and we also hid the button (and the whole filled-in form) behind a semitransparent DIV layer, actually preventing double submits. We did not have high expectations. After all, it did not make the system faster. The users loved the new feature. First of all, they felt that we cared. They had been complaining and now we were doing something for them. Interestingly, they also felt the system was faster because of the rotating hourglass on the screen. End of story? Almost.

Learn

After a week or so we executed the measurement again. It was not a big effort, since all the tooling was already there. What we experienced was that the 10% difference between the number of transactions measured on the client and on the server practically vanished. Probably those were the transactions where the user pressed the button a second time. Each was a full processing run on the server side, but was not reported by the client, since the transaction, as well as the measurement on the client side, was cancelled. These got eliminated with the improved user interface, which also decreased the load on the server by 10%. Which finally resulted in slightly faster response times.

Usual disclaimers apply.

The Little Architect

Uncle Bob recently published an article titled “A Little Architecture”. The article is a conversation between a young developer and a senior (Uncle Bob himself, presumably) about being a software architect. The article starts with these sentences:

  • I want to become a Software Architect.
  • That’s a fine goal for a young software developer.
  • I want to lead a team and make all the important decisions about databases and frameworks and web-servers and all that stuff.

The next part asks the young developer to list what the important things are. However that is not the only thing that may be interesting in this last sentence. There is another thing, perhaps less technical, that hit me. The young developer says: “make … decisions”.

That may be a mistake. You can interpret differently what “making a decision” means, but let me tell you my thoughts about that here. Some thoughts that were triggered by those two words. First of all, here is a story from when I was making some decisions.

Story

Not really many years ago, when I was much younger, I was acting as a system architect and I made a decision on how to store some content. Mainly text and not-too-large pictures. The obvious choice could have been to use a database and implement the CRUD operations. A database is always a good solution, just as a scarf is always a good gift for Christmas. You love getting a new scarf every Christmas, don’t you?

On second thought, however, the real power of a database shows when the content is to be searched and indexed and when transactions are executed. These were not really requirements for this media store. On the other hand, versioning and user-level access control were. I had implemented something like that in the past, and that time we used SVN for content storage. And that worked fine. So I decided that we should go and use SVN this time as well. The project was a success story. A little bit more story than success, though. Halfway thriving towards the solution, the back-end storage was replaced by a DB layer.

Why didn’t SVN work?

The reason is simple. The developers did not like and did not understand the decision. They were not familiar with the technology. They used SVN for source code storage, but they had never used its programming API. Instead of using the Java client, they forked external svn processes and checked out files individually. Displaying a directory containing 20 files was starting 20 processes one after the other. On that system that was approx. 20 seconds.

Okay. It could have been mended in different ways: there was not enough control on the use of the technology, and there was a lack of professional code review as well as of performance testing in due time, and so on. The root of the problems, though, was that I made the decision. I was acting like an omnipotent god who knows it much better. I was not and I did not.

So what?

I could have done better discussing the solution more with the developers, until we all agreed on what the solution should be. I could have understood that the DB solution was better, or they could have understood how SVN could be used that way. We could have made the decision together. I could have made it so that they could make the decision.

A real architect never makes a decision.

A real architect works with the team developing the software, asking the right questions and making sure that the team makes the right decision.

Good architects approve the decision of the team and bear the responsibility. Bad architects make the decisions and blame the team.

Part of it is psychology. If the team makes the decision, they are more likely to love the idea than if it were force-fed. They may come up with some ideas that you missed. Good architects recognize that and improve themselves. A really good architect can even admit at this stage to being wrong. Contrary to what young developers think, this increases the esteem. (Unless the architect is wrong more times than not, in which case he or she is not really a good architect.)

Asking the questions also reveals whether the team is unprepared for some of the technologies and whether they have to learn something new. It may turn out that training is needed, or that some more familiar technology is to be used. This may also be a smell that you wanted to use some niche technology that may require expensive developers in the coming years to maintain the product. You better don’t!

This does not mean that you should open the floodgates. You still should approve the decision, and you should not approve a decision you can not live with. If the team makes a decision on some technology that you feel is not good enough, it means you have not asked the right questions. You should ask more. The responsibility is yours.

I recommend that if you want to be a good architect, you let the team make the decision and help them forge a good one. Approve it and never blame them. That way they will not leave you out in the cold. If you even bring free pizza now and then, they may even love you.

Creating proxy object using djcproxy

During the last weeks I have shown how to create a proxy object using the Java reflection API and cglib. In this article I will show you how this can be done using djcproxy.

Oh, not again, another proxy implementation!

What is the point of writing about this, in addition to the selfish fact that I created this proxy? The point is that this is a proxy that is written in Java, and it creates Java code that can be examined. It also compiles and loads the created Java classes on the fly, so it is also usable, but the main advantage is that you can easily get a good insight into how a dynamic proxy works. At least a bit more easily than digging around in the code of cglib, which creates byte code directly.

How to use it

You can get the source from GitHub, or you can just add the dependency to your project’s Maven POM:

<dependency>
	<groupId>com.javax0</groupId>
	<artifactId>djcproxy</artifactId>
	<version>2.0.3</version>
</dependency>

After that you can use the following code:

class A {
    public int method() {
        return 1;
    }
}

class Interceptor implements MethodInterceptor {

    @Override
    public Object intercept(Object obj, Method method, Object[] args,
                            MethodProxy mproxy) throws Exception {
        if (method.getName().equals("toString")) {
            return "interceptedToString";
        }
        return 0;
    }
}

 ...

    A a = new A();
    ProxyFactory<A> factory = new ProxyFactory<>();
    A s = factory.create(a, new Interceptor());

This code can be found among the tests of the project on GitHub. This is an edited, abbreviated version, prone to editing errors.

The class ‘A’ is the original class, and when we want to create a new proxy object we create a proxy to an already existing object. This is different from reflection or cglib. In the case of cglib you create a proxy object and it “contains” the original object. It is not really containment in OO terms, because the proxy class extends the original class. However, because of this extending, the proxy object is also an instance of the original class. Cglib does not really care which class instance (object) you want to intercept. You can inject a reference to any object instance into your interceptor if you want. Djcproxy uses a different approach and does that for you, and in your interceptor you get this object passed as an argument. This is why you have to instantiate the original object first (the new A() in the sample above).

The Interceptor implements the interface MethodInterceptor, also provided in the library. It has only one method: intercept(), which is invoked when a method is called on the proxy object. The arguments are

  • obj – the original object
  • method – the method that was invoked in the proxy object
  • args – the arguments that were passed to the method call on the proxy object. Note that primitive arguments will be boxed.
  • mproxy – the method proxy that can be used to call the method on the original object or on just any other object of the same type

This is all about how to use this library. The next thing is to have a look at what is generated, so that you can get a better understanding of how a proxy works. Insight never hurts, even if you use a different proxy. Many times debugging, or just generating better code, is easier when you know the principles of a library you use.

While cglib gives you a static factory method to create new objects, djcproxy requires that you create a proxy factory, as you can see in the sample above. If you want to use it the same way as you used cglib, you can declare a static ProxyFactory field in the class where you want to use the factory. On the other hand, it is possible to have different factories in different parts of the code. Although the advantage of that is rarely needed, I still believe it is a cleaner approach than providing a static factory method.

How does the proxy work?

The extra thing in this package is that it lets you get access to the generated source. You can insert the lines

    String generatedSource = factory.getGeneratedSource();
    System.out.println(generatedSource);

to print out the generated proxy class, which after some formatting looks like this:

package com.javax0.djcproxy;

class PROXY$CLASS$A extends com.javax0.djcproxy.ProxyFactoryTest.A implements com.javax0.djcproxy.ProxySetter {
    com.javax0.djcproxy.ProxyFactoryTest.A PROXY$OBJECT = null;
    com.javax0.djcproxy.MethodInterceptor PROXY$INTERCEPTOR = null;

    public void setPROXY$OBJECT(java.lang.Object PROXY$OBJECT) {
        this.PROXY$OBJECT = (com.javax0.djcproxy.ProxyFactoryTest.A) PROXY$OBJECT;

    }

    public void setPROXY$INTERCEPTOR(com.javax0.djcproxy.MethodInterceptor PROXY$INTERCEPTOR) {
        this.PROXY$INTERCEPTOR = PROXY$INTERCEPTOR;

    }

    PROXY$CLASS$A() {
        super();

    }

    private com.javax0.djcproxy.MethodProxy method_MethodProxyInstance = null;

    @Override
    public int method() {

        try {
            if (null == method_MethodProxyInstance) {
                method_MethodProxyInstance = new com.javax0.djcproxy.MethodProxy() {
                    public java.lang.Object invoke(java.lang.Object obj, java.lang.Object[] args) throws Throwable {
                        return ((com.javax0.djcproxy.ProxyFactoryTest.A) obj).method();

                    }
                };
            }
            return (int) PROXY$INTERCEPTOR.intercept(
                    PROXY$OBJECT, PROXY$OBJECT.getClass().getMethod("method", new Class[]{}),
                    new Object[]{}, method_MethodProxyInstance);
        } catch (Throwable e) {
            throw new RuntimeException(e);
        }

    }


... other overridden methods deleted ...

}

Note that the class A is a static nested class of ProxyFactoryTest in this generated code.

The interesting code is the overriding of the method method(). (Sorry for the name. I have no imagination to find a better name for a method that does nothing special.) Let’s skip the part where the method checks if there is already a MethodProxy instance and creates one if it is missing. The method method() actually calls the interceptor object that we defined, passing the proxied object, the reflective method object, the arguments and also the method proxy.

What is the method proxy?

The name may be confusing at first, because we already have an “object” proxy. There is a separate method proxy for each method of the original class. These can be used to invoke the original method without a reflective call. This speeds up the usage of the proxies. You can also find this call and a similar mechanism in cglib.
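
Based on the invoke() signature visible in the generated code above, an interceptor can use the method proxy to call the original method without reflection (a sketch):

class DelegatingInterceptor implements MethodInterceptor {

    @Override
    public Object intercept(Object obj, Method method, Object[] args,
                            MethodProxy mproxy) throws Exception {
        try {
            // no Method.invoke() here: the method proxy calls the original
            // method through generated code
            return mproxy.invoke(obj, args);
        } catch (Throwable t) {
            throw new Exception(t);
        }
    }
}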

Notes

The implementation has some flaws; for example, the late method proxy instantiation has no real advantage, but at the same time may hurt in case of multi-threaded execution of the proxies. It would also be possible to create a proxy class that not only extends a class but also implements arbitrary interfaces (perhaps some that are not even implemented by the extended class). The implementation is used in some other hobby open source projects, also available on GitHub, about which I may write in the future. They are more demonstrative, educational and proof-of-concept projects than production code. If you have anything to say about the implementation, the ideas, or just anything else, please reward me with your comments.

Creating a proxy object using cglib

In the previous post I was talking about the standard Java based proxy objects. These can be used when you want to have a method invocation handler on an object that implements an interface. The Java reflection proxy creation demands that you have an object that implements an interface. Sometimes, however, the object we want to proxy is out of our hands; it does not implement the interface that we want to invoke from our handler, and still we want to have a proxy.

When do we need a proxy to objects w/o an interface?

This is a very common case. It happens whenever we have a JPA implementation, e.g. Hibernate, that implements lazy loading of records. For example, the audit log records are stored in a table, and each record, except the first one, has a reference to the previous item. Something like

class LinkedAuditLogRecord {
  LinkedAuditLogRecord previous;
  AuditLogRecord actualRecord;
}

Loading a record via JPA will return an object LinkedAuditLogRecord, which contains the previous record as an object, and so on until the first one that probably has null in the field named previous. (This is not actual code.) Any JPA implementation grabbing and loading the whole table from the start to the record of our interest would be an extremely poor implementation. Instead, the persistence layer loads the actual record only and creates a proxy object extending LinkedAuditLogRecord, and that is what the field previous is going to hold. The actual fields are usually private fields, and if ever our code tries to access the previous record, the proxy object will load it at that time. This is lazy loading in short.
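
Conceptually the generated proxy does something like the following sketch. This is not actual Hibernate code; the getter and the loader method are hypothetical and only illustrate the mechanism:

class LinkedAuditLogRecordProxy extends LinkedAuditLogRecord {
    private final long previousId;       // key of the not-yet-loaded record
    private LinkedAuditLogRecord loaded; // null until the first access

    LinkedAuditLogRecordProxy(long previousId) {
        this.previousId = previousId;
    }

    @Override
    AuditLogRecord getActualRecord() {   // hypothetical getter on the entity
        if (loaded == null) {
            loaded = loadFromDatabase(previousId); // hypothetical loader
        }
        return loaded.getActualRecord();
    }
}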

But how do JPA implementations create proxies to objects of classes that do not implement interfaces? The Java reflection proxy implementation can not do that, and thus JPA implementations use something different. What they usually use is cglib.

What is cglib

Cglib is an open source library that is capable of creating and loading class files in memory during Java run time. To do that it uses the Java byte-code generation library ‘asm’, which is a very low level byte code creation tool. I will not dig that deep in this article.

How to use cglib

Creating a proxy object using cglib is almost as simple as using the JDK reflection proxy API. I created the same code as in last week’s article, this time using cglib:

package proxy;

import net.sf.cglib.proxy.Enhancer;
import net.sf.cglib.proxy.MethodInterceptor;
import net.sf.cglib.proxy.MethodProxy;

import java.lang.reflect.Method;

public class CglibProxyDemo {

    static class Original {
        public void originalMethod(String s) {
            System.out.println(s);
        }
    }

    static class Handler implements MethodInterceptor {
        private final Original original;

        public Handler(Original original) {
            this.original = original;
        }

        public Object intercept(Object o, Method method, Object[] args, MethodProxy methodProxy) throws Throwable {
            System.out.println("BEFORE");
            method.invoke(original, args);
            System.out.println("AFTER");
            return null;
        }
    }

    public static void main(String[] args){
        Original original = new Original();
        MethodInterceptor handler = new Handler(original);
        Original f = (Original) Enhancer.create(Original.class,handler);
        f.originalMethod("Hallo");
    }
}

The difference is that the names of the classes are a bit different and we do not have an interface.

It is also important that the proxy class extends the original class, and thus when the proxy object is created it invokes the constructor of the original class. In case this is resource hungry we may have some issue with that. However, this is something that we can not circumvent. If we want to have a proxy object to an already existing class, then we should either have an interface or we have to extend the original class; otherwise we just could not use the proxy object in place of the original one.

Java Dynamic Proxy

Proxy is a design pattern. We create and use proxy objects when we want to add or modify some functionality of an already existing class. The proxy object is used instead of the original one. Usually the proxy objects have the same methods as the original one, and in Java proxy classes usually extend the original class. The proxy has a handle to the original object and can call its methods.

This way proxy classes can implement many things in a convenient way:

  • logging when a method starts and stops
  • perform extra checks on arguments
  • mocking the behavior of the original class
  • implement lazy access to costly resources

without modifying the original code of the class. (The above list is not exhaustive, only examples.)

In practical applications the proxy class does not directly implement the functionality. Following the single responsibility principle, the proxy class does only proxying, and the actual behavior modification is implemented in handlers. When the proxy object is invoked instead of the original object, the proxy decides if it has to invoke the original method or some handler. The handler may do its task and may also call the original method.

Even though the proxy pattern does not only apply to situations where the proxy object and proxy class are created during run-time, this is an especially interesting topic in Java. In this article I will focus on these proxies.

This is an advanced topic because it requires the use of the reflection API, or byte code manipulation, or compiling dynamically generated Java code. Or all of these. To have a new class that is not yet available as byte code during run-time requires generating the byte code and a class loader that loads the byte code. To create the byte code you can use cglib or bytebuddy or the built-in Java compiler.

When we think about the proxy classes and the handlers they invoke we can understand why the separation of responsibilities in this case is important. The proxy class is generated during run-time, but the handler invoked by the proxy class can be coded in the normal source code and compiled along the code of the whole program (compile time).

The easiest way to do this is to use the java.lang.reflect.Proxy class, which is part of the JDK. That class can create a proxy class or directly an instance of it. The use of the Java built-in proxy is easy. All you need to do is implement a java.lang.reflect.InvocationHandler so that the proxy object can invoke it. The InvocationHandler interface is extremely simple. It contains only one method: invoke(). When invoke() is invoked, the arguments contain the proxy object, the method that was invoked (as a reflection Method object) and the object array of the original arguments. A sample code demonstrates the use:

package proxy;

import java.lang.reflect.InvocationHandler;
import java.lang.reflect.InvocationTargetException;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;

public class JdkProxyDemo {

    interface If {
        void originalMethod(String s);
    }

    static class Original implements If {
        public void originalMethod(String s) {
            System.out.println(s);
        }
    }

    static class Handler implements InvocationHandler {
        private final If original;

        public Handler(If original) {
            this.original = original;
        }

        public Object invoke(Object proxy, Method method, Object[] args)
                throws IllegalAccessException, IllegalArgumentException,
                InvocationTargetException {
            System.out.println("BEFORE");
            method.invoke(original, args);
            System.out.println("AFTER");
            return null;
        }
    }

    public static void main(String[] args){
        Original original = new Original();
        Handler handler = new Handler(original);
        If f = (If) Proxy.newProxyInstance(If.class.getClassLoader(),
                new Class[] { If.class },
                handler);
        f.originalMethod("Hallo");
    }

}

If the handler wants to invoke the original method on the original object, it has to have access to it. This is not provided by the Java proxy implementation. You have to pass this argument to the handler instance yourself in your code. (Note that there is an object usually named proxy passed as an argument to the invocation handler. This is the proxy object that the Java reflection dynamically generates, and not the object we want to proxy.) This way you are absolutely free to use a separate handler object for each original class, or to use some shared object that happens to know some way which original object to invoke, if there is any method to invoke at all.

As a special case, you can create an invocation handler and a proxy of an interface that does not have any original object. Even more, it is not needed to have any class that implements the interface in the source code. The dynamically created proxy class will implement the interface.
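
For example, reusing the If interface from the sample above, a proxy can be created with nothing but a lambda as the invocation handler (InvocationHandler has a single method, so a lambda will do):

If f = (If) Proxy.newProxyInstance(
        If.class.getClassLoader(),
        new Class[] { If.class },
        (proxy, method, args) -> {
            // there is no original object at all behind this proxy
            System.out.println("invoked: " + method.getName());
            return null;
        });
f.originalMethod("Hallo");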

What should you do if the class you want to proxy does not implement an interface? In that case you have to use some other proxy implementation. We will look at that next week.

Value types in Java: why should they be immutable?

Value types need not be immutable. But they are.

In the previous post I discussed the difference between pointers and references in Java and how method parameters are passed (by value or by reference). These are strongly related to value types, which do not exist in Java (yet).

There is a proposal from John Rose, Brian Goetz, and Guy Steele detailing how value types will/may work in Java, and there are also some good articles about it. I have read “Value Types: Revamping Java’s Type System”, which I liked a lot and recommend reading. If the proposal is too dense for you to follow the topic, you can read that article first. It summarizes very well the background, what value types are, the advantages, why it is a problem that Java does not implement value types, and why it is not trivial. Even though the terminology “value type” may also be used to denote something different, I will use it as it is used in the proposal and in the article.

How do we pass arguments vs. what do we store in variables

As you may recall, in the previous article I detailed that Java passes method arguments by reference or by value depending on the type of the argument:

  • a reference is passed when the argument is an object
  • the value is passed when the argument is a primitive.

There are some comments on the original post, and also on the JCG republish, that complain about my terminology of passing an argument by-reference. The comments state that arguments are always passed by value, because the variables already contain references to the objects. In reality, however, variables contain bits. Still, it is important how we imagine those bits and what terminology we use when we communicate. We can either say that

  1. class variables contain objects and in that case we pass these objects to methods by-reference
  2. or we can say that the variables contain the reference and in that case we pass the value of the variables.

If we follow thinking #1 then the argument passing is by-value and/or by-reference based on the actual nature of the argument (object or primitive). If we follow thinking #2 then the variables store references and/or values based on the nature of their type. I personally like to think that when I write

Triangle triangle;

then the variable triangle is a triangle and not a reference to a triangle. But it does not really matter what it is in my brain. In either of the cases #1 or #2 there is a different approach for class types and for primitives. If we introduce value types to the language the difference becomes more prevalent and important to understand.

Value types are immutable

I explained that the implicit argument passing based on type does not cause any issue, because primitives are immutable and therefore, when passed as method arguments, they could not be changed even if they were passed by reference. So we usually do not care. Value types are no different. Value types are also immutable, because they are values and values do not change. For example, the value of PI is 3.141592… and it never changes.

However, what does this immutability mean in programming? Values, be they real numbers, integers or compound value types, are all represented in memory as bits. Bits in memory (unless the memory is ROM) can be changed.

In case of an object, immutability is fairly simple. There is an object somewhere in the universe that we can not alter. There can be numerous variables holding the object (having a reference to it), and the code can rely on the fact that the bits at the memory location where the actual value of the object is represented are not changed (more or less).

In case of value types this is a bit different and this difference comes from the different interpretation of the bits that represent a value type from the same bits when they may represent an object.

Value types have no identity

Value types do not have identity. You can not have two int variables holding the value 3 and distinguish one from the other. They hold the same value. This is the same when the type is more complex.

Say I have a value type that has two fields, like

ValueType TwoFields {
  int count;
  double size;
}

and say I have two variables

 TwoFields tF1 = new TwoFields(1, 3.14);
 TwoFields tF2 = new TwoFields(1, 3.14);

I can not tell the variables tF1 and tF2 apart. If they were objects they would be equals to each other but not == to each other. In case of value types there is no == as they have no identity.
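
The closest you can get to feeling this difference in today’s Java is comparing boxed and primitive values (a small demo):

// two distinct Integer objects holding the same value
Integer a = new Integer(1000); // explicit constructor used only for illustration
Integer b = new Integer(1000);
System.out.println(a.equals(b)); // true: the same value
System.out.println(a == b);      // false: two different identities
// primitives have only value semantics, identity does not even make sense
int x = 1000;
int y = 1000;
System.out.println(x == y);      // true: there is nothing else to compare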

If TwoFields is an immutable class, I can not or should not write

 TwoFields tF;
  ...
 tF.count++;

or some similar construct. But I can still write

 TwoFields tF;
  ...
 tF = new TwoFields(tF.count + 1, tF.size);

which leaves the original object intact. If TwoFields is a value type then either of the constructs, whichever is allowed, will create a new value.

Value types as arguments

How are value types passed as method arguments then? Probably by copying the value to the parameter variable. Possibly by passing some reference. It is, however, up to the compiler (be it Java or some other language). Why?

  • Value types are usually small. At least they should be small. A huge value type loses the advantages that value types deliver but keeps the disadvantages.
  • Value types are immutable, so there is no problem copying them, just like in the case of primitives. They can be passed by value the same way as “everything in Java is passed by value”.
  • They have no identity, there can be no references to them.

But this is not only about passing them as arguments. This is also how variables are assigned. Look at the code

 TwoFields tF1 = new TwoFields(1, 3.14);
 TwoFields tF2 = new TwoFields(1, 3.14);

and compare it to

 TwoFields tF1 = new TwoFields(1, 3.14);
 TwoFields tF2 = tF1;

If TwoFields is a value type there should be no difference between the two versions. They have to produce the same result (though maybe not through the same code when compiled). In this respect there is no real difference between argument passing and variable assignment. Values are copied, even if the actual variables as bits contain some references to some memory locations where the values are stored.

Summary

As I started the article: value types need not be immutable. This is not something that the language designers decide. They are free to implement something that is mutable, but in that case it will not be a value type. Value types are immutable.

Pointers in Java

Are there pointers in Java? The short answer is “no, there are none” and this seems to be obvious for many developers. But why is it not that obvious for others?

That is because the references that Java uses to access objects are very similar to pointers. If you have experience with C programming from before Java, it may be easier to think about the values that are stored in the variables as pointers that point to some memory location holding the object. And it is more or less ok. More less than more, but that is what we will look at now.

Difference between reference and pointer

As Brian Agnew summarized on Stack Overflow, there are two major differences.

  1. There is no pointer arithmetic
  2. References do not “point” to a memory location

Missing pointer arithmetic

When you have an array of a struct in C, the memory allocated for the array contains the content of the structures one after the other. If you have something like

struct circle {
   double radius;
   double x,y;
}
struct circle circles[6];

it will occupy 6*3*sizeof(double) bytes in memory (that is usually 144 bytes on a 64-bit architecture) in a continuous area. If you have something similar in Java, you need a class (until we get to Java 10 or later):

class Circle {
   double radius;
   double x,y;
}

and the array

Circle[] circles = new Circle[6];

will need 6 references (48 bytes or so) and also 6 objects (unless some of them are null), each with 24 bytes of data (or so) plus an object header (16 bytes). That totals 288 bytes on a 64-bit architecture, and the memory area is not continuous.

When you access an element, say circles[n], of the C language array, the code uses pointer arithmetic. It takes the address stored in the pointer circles, adds n times sizeof(struct circle) (bytes), and that is where the data is.

The Java approach is a bit different. It looks at the object circles, which is an array, calculates the location of the n-th element (this is similar to C) and fetches the reference data stored there. After the reference data is at hand, it uses that to access the object at some different memory location where the reference data leads.

Note that in this case the memory overhead of Java is 100% and also the number of memory reads is 2 instead of 1 to access the actual data.

References do not point to memory

Java references are not pointers. They contain some kind of pointer data or something like it, because that comes from the nature of today’s computer architectures, but it is totally up to the JVM implementation what it stores in a reference value and how it accesses the object it refers to. It would be an absolutely ok, though not too effective, implementation to have a huge array of pointers, each pointing to an object of the JVM, with the references being indices into this array.

In reality, JVMs implement references as a kind of pointer mix, where some of the bits are flags and some of the bits are “pointing” to some memory location relative to some area.

Why do JVMs do that instead of pointers?

The reason is garbage collection. To implement effective garbage collection and to avoid fragmentation of the memory, the JVM regularly moves the objects around in memory. When memory occupied by objects that are not referenced anymore is freed, and we happen to have a small object still used and referenced in the middle of a huge available memory block, we do not want that memory block to be split. Instead, the JVM moves the object to a different memory area and updates all the references to that object to keep track of the new location. Some GC implementations stop the other Java threads for the time these updates happen, so that no Java code uses a reference not yet updated while objects have moved. Other GC implementations integrate with the underlying OS virtual memory management to cause a page fault when such an access occurs, to avoid stopping the application threads.

However, the thing is that references are NOT pointers, and it is the responsibility of the implementation of the JVM how it manages all these situations.

The next topic strongly related to this area is parameter passing.

Are parameters passed by value or passed by reference in Java?

The first programming language I studied at the university was PASCAL, invented by Niklaus Wirth. In this language, procedure and function arguments can be passed by value or by reference. When a parameter is passed by reference, the declaration of the argument in the procedure or function head is preceded by the keyword VAR. At the place of use of the function, the programmer is not allowed to write an expression as the actual argument. You have to use a variable, and any change to the argument in the function (procedure) will have an effect on the variable passed as argument.

When you program in the language C, you always pass a value. But this is actually a lie, because you may pass the value of a pointer that points to a variable that the function can modify. That is when you write things like char *s as an argument, and then the function can alter the character pointed to by s, or a whole string if it uses pointer arithmetic.

In PASCAL the declaration of pass-by-value OR pass-by-reference is at the declaration of the function (or procedure). In C you explicitly have to write an expression like &s to pass the pointer to the variable s so that the called function can modify it. Of course the function also has to be declared to work with a pointer to whatever type s has.

When you read PASCAL code, you can not tell at the place of the actual function call whether the argument is passed by value or by reference, and thus whether it may be modified by the function. In the case of C you have to code it at both of the places, and whenever you see that the argument value &s is passed, you can be sure that the function is capable of modifying the value of s.

What is it then with Java? You may program in Java for years and never face the issue or give it a thought. Does Java solve the issue automatically? Or does it just give a solution that is so simple that the dual pass-by-value/reference approach does not exist?

The sad truth is that Java actually hides the problem, it does not solve it. As long as we work only with objects, Java passes by reference. Whatever expression you write as the actual argument, when the result is an object, a reference to the object is passed to the method. If the expression is a variable, then the reference contained by the variable (which is the value of the variable, so this is a kind of pass-by-value) is passed.

When you pass a primitive (int, boolean etc.), then the argument is passed by value. If the evaluated expression results in a primitive, then it is passed by value. If the expression is a variable, then the primitive value contained by the variable is passed. That way, looking at the three example languages, we can say that

  • PASCAL declares how to pass arguments
  • C calculates the actual value where it is passed
  • Java decides based on the type of the argument (see the small demo after this list)
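
The following small demo shows what this means in Java in practice: a method can modify the object that the passed reference leads to, but reassigning the parameter has no effect at the caller:

public class ArgumentPassingDemo {

    static void mutate(StringBuilder sb) {
        sb.append(" world");             // modifies the object the caller also sees
    }

    static void reassign(StringBuilder sb) {
        sb = new StringBuilder("other"); // changes only the local copy of the reference
    }

    public static void main(String[] args) {
        StringBuilder sb = new StringBuilder("hello");
        mutate(sb);
        reassign(sb);
        System.out.println(sb);          // prints "hello world"
    }
}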

Java, in my opinion, is a bit messy. But I did not realize it for a long time, because this messiness is limited and is hidden well by the fact that the boxed versions of the primitives are immutable. Why would you care about the underlying mechanism of argument passing if the value can not be modified anyway? If it is passed by value: it is OK. If it is passed by reference: it is still okay, because the object is immutable.

Would it cause problem if the boxed primitive values were mutable? We will see if and when we will have value types in Java.