Author Archives: Peter Verhas

About Peter Verhas

Java expert, developer, architect, author, speaker, teacher, mentor, and he very much likes to be alive.

Get rid of pom XML… almost

Introduction

POM files are XML formatted files that declaratively describe the build structure of a Java project to be built using Maven. Maintaining the POM XML files of large Java projects is often cumbersome. XML is verbose, and the structure of the POM also forces you to maintain redundant information. The naming of the artifacts is often redundant, repeating some part of the name in the groupId and in the artifactId. In a multi-module project the version of the project has to appear in many files. Some of the repetition can be reduced using properties defined in the parent POM, but you still have to define the parent POM version in each module POM, because you refer to a POM by its artifact coordinates and not simply as "the POM that is there in the parent directory". The parameters of the plugins and dependencies can be configured in the parent POM in the pluginManagement and dependencyManagement sections, but you still cannot get rid of the list of plugins and dependencies in each and every module POM, even though they are usually just the same.

You may argue with me, because it is also a matter of taste, but for me, POM files in their XML format are just too redundant and hard to read. Maybe I am not meticulous enough, but I often miss errors in my POM files and have a hard time fixing them.

There are some technologies that support other formats, but they are not widely used. One such approach to get rid of the XML is Polyglot Maven. However, if you look at the very first example on that GitHub page, which is a Ruby-format POM, you can still see a lot of redundant information and repetition. This is because Polyglot Maven plugs into Maven itself and replaces only the XML format with something different, but it does not help with the redundancy of the POM structure itself.

In this article, I will describe an approach that I found much better than any other solution: the POM files remain XML for the build process, so there is no need for any new plugin or change in the build process, but these pom.xml files are generated using the Jamal macro language from a pom.xml.jam file and a few extra macro files that are shared by the modules.

Jamal

The idea is to use a text-based macro language to generate the XML files from a source file that contains the same information in a reduced format. This is a kind of programming. The macro description is a program that outputs the verbose XML format. When the macro language is powerful enough, the source code can be descriptive and not too verbose. My choice was Jamal. To be honest, one of the reasons to select Jamal was that it is a macro language that I developed almost 20 years ago in Perl, and half a year ago I reimplemented it in Java.

The language itself is very simple. Text and macros are mixed together, and the output is the text plus the result of the macros. The macros start with the { character, or any other string that is configured, and end with the corresponding } character, or with the string that was configured to be the closing string. Macros can be nested, and there is fine control over the order in which the nested macros are evaluated. There are user-defined and built-in macros. One of the built-in macros is define, which is used to define user-defined macros.

An example talks better. Let’s have a look at the following test.txt.jam file.

{@define GAV(_groupId,_artifactId,_version)=
    {#if |_groupId|<groupId>_groupId</groupId>}
    {#if |_artifactId|<artifactId>_artifactId</artifactId>}
    {#if |_version|<version>_version</version>}
}

{GAV :com.javax0.geci:javageci-parent:1.1.2-SNAPSHOT}

Processing it with Jamal, we get


    <groupId>com.javax0.geci</groupId>
    <artifactId>javageci-parent</artifactId>
    <version>1.1.2-SNAPSHOT</version>

I deleted the empty lines manually for typesetting reasons, but you get the general idea. GAV is defined using the built-in macro define. It has three arguments named _groupId, _artifactId and _version. When the macro is used, the formal argument names in the body of the macro are replaced with the actual values, and the result replaces the user-defined macro in the text. The text of the define built-in macro itself is an empty string. There is a special meaning to using @ versus # in front of the built-in macros, but in this article, I cannot get into that level of detail.

The if macros also make it possible to omit groupId, artifactId or version, thus

{GAV :com.javax0.geci:javageci-parent:}

also works and will generate

    <groupId>com.javax0.geci</groupId>
    <artifactId>javageci-parent</artifactId>

If you feel that there is still a lot of redundancy in the definition of the macros: you are right. This is the simple approach to defining GAV, but you can go to the extreme:

{#define GAV(_groupId,_artifactId,_version)=
    {@for z in (groupId,artifactId,version)=
        {#if |_z|<z>_z</z>}
    }
}{GAV :com.javax0.geci:javageci-parent:}

Be warned that this needs an insane level of understanding of the macro evaluation order, but as an example, it shows the power. More information on Jamal can be found at https://github.com/verhas/jamal

Let’s get back to the original topic: how Jamal can be used to maintain POM files.

Cooking pom to jam

There can be many ways to do this, each of which may be just as good. Here I describe the first approach I used for the Java::Geci project. I create a pom.jim file (jim stands for Jamal imported or included files). This contains the definitions of macros like GAV, dependencies, dependency and many others. You can download this file from the Java::Geci source code repo: https://github.com/verhas/javageci The pom.jim file can be the same for all projects; there is nothing project-specific in it. There is also a version.jim file that contains the macro that defines, in one single place, the project version, the version of Java I use in the project, and the groupId for the project. When I bump the release number from -SNAPSHOT to the next release, or from the release to the next -SNAPSHOT, this is the only place where I need to change it, and the macro can be used to refer to the project version in the top-level POM, but also in the module POMs referring to the parent.

In every directory where there should be a pom.xml file, I create a pom.xml.jam file. This file imports the pom.jim file, so the macros defined there can be used in it. As an example, the pom.xml.jam file of the Java::Geci javageci-engine module is the following:

{@import ../pom.jim}
{project |jar|
    {GAV ::javageci-engine:{VERSION}}
    {parent :javageci-parent}
    {name|javageci engine}
    {description|Javageci macro library execution engine}

    {@include ../plugins.jim}

    {dependencies#
        {@for MODULE in (api,tools,core)=
            {dependency :com.javax0.geci:javageci-MODULE:}}
        {@for MODULE in (api,engine)=
            {dependency :org.junit.jupiter:junit-jupiter-MODULE:}}
    }
}

I think that this is fairly readable; at least for me, it is more readable than the original pom.xml was:

<project xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns="http://maven.apache.org/POM/4.0.0"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <packaging>jar</packaging>
    <artifactId>javageci-engine</artifactId>
    <version>1.1.1-SNAPSHOT</version>
    <parent>
        <groupId>com.javax0.geci</groupId>
        <artifactId>javageci-parent</artifactId>
        <version>1.1.1-SNAPSHOT</version>
    </parent>
    <name>javageci engine</name>
    <description>Javageci macro library execution engine</description>
    <build>
        <plugins>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-surefire-plugin</artifactId>
            </plugin>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-source-plugin</artifactId>
            </plugin>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-javadoc-plugin</artifactId>
            </plugin>
        </plugins>
    </build>
    <dependencies>
        <dependency>
            <groupId>com.javax0.geci</groupId>
            <artifactId>javageci-api</artifactId>
        </dependency>
        <dependency>
            <groupId>com.javax0.geci</groupId>
            <artifactId>javageci-tools</artifactId>
        </dependency>
        <dependency>
            <groupId>com.javax0.geci</groupId>
            <artifactId>javageci-core</artifactId>
        </dependency>
        <dependency>
            <groupId>org.junit.jupiter</groupId>
            <artifactId>junit-jupiter-api</artifactId>
        </dependency>
        <dependency>
            <groupId>org.junit.jupiter</groupId>
            <artifactId>junit-jupiter-engine</artifactId>
        </dependency>
    </dependencies>
</project>

To start Jamal I can use the Jamal Maven plugin. The easiest way to do that is to have a genpom.xml POM file in the root directory, with the following content:

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns="http://maven.apache.org/POM/4.0.0"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <groupId>com.javax0.jamal</groupId>
    <artifactId>pom.xml_files</artifactId>
    <version>out_of_pom.xml.jam_files</version>
    <build>
        <plugins>
            <plugin>
                <groupId>com.javax0.jamal</groupId>
                <artifactId>jamal-maven-plugin</artifactId>
                <version>1.0.2</version>
                <executions>
                    <execution>
                        <id>execution</id>
                        <phase>clean</phase>
                        <goals>
                            <goal>jamal</goal>
                        </goals>
                        <configuration>
                            <transformFrom>\.jam$</transformFrom>
                            <transformTo></transformTo>
                            <filePattern>.*pom\.xml\.jam$</filePattern>
                            <exclude>target|\.iml$|\.java$|\.xml$</exclude>
                            <sourceDirectory>.</sourceDirectory>
                            <targetDirectory>.</targetDirectory>
                            <macroOpen>{</macroOpen>
                            <macroClose>}</macroClose>
                        </configuration>
                    </execution>
                </executions>
            </plugin>
        </plugins>
    </build>
</project>

Having this I can start Maven with the command line mvn -f genpom.xml clean. This not only creates all the POM files but also cleans the previous compilation result of the project, which is probably a good idea when the POM file changes. It can also be executed when there is no pom.xml yet in the directory, or when the file is not valid due to some bug you may have in the jam-cooked POM file. Unfortunately, all recursion has to end somewhere, and it is not feasible, though possible, to maintain genpom.xml itself as a jam-cooked POM file.

Summary

What I described is one approach: use a macro language as the source instead of editing the raw pom.xml file. The advantage is a shorter and simpler project definition. The disadvantage is the extra POM generation step, which is manual and not part of the build process. You also lose the possibility to use the Maven release plugin directly, since that plugin modifies the POM file. I myself always had problems using that plugin, but that is probably my error and not that of the plugin. Also, you have to learn a bit of Jamal, but that may also be an advantage if you happen to like it. In short: you can give it a try if you fancy. Starting is easy since the tool (Jamal) is published in the central repo, and the source and the documentation are on GitHub; thus all you need is to craft the genpom.xml file, cook some jam and start the plugin.

POM files are not the only source files that can be served with jam. I can easily imagine the use of Jamal macros in product documentation. All you need is to create a documentationfile.md.jam file as a source file and modify the main POM to run Jamal during the build process, converting the .md.jam to the resulting macro-processed markdown document. You can also set up a separate POM, just like we did in this article, in case you want to keep the execution of the conversion strictly manual. You may even have java.jam files in case you want a preprocessor for your Java files, but I beg you not to do that. I do not want to burn in eternal flames in hell for giving you Jamal. It is not for that purpose.

There are many other possible uses of Jamal. It is a powerful macro language that is easy to embed into applications and also easy to extend with macros written in Java. Java::Geci also has a 1.0 version module that supports Jamal to ease code generation, still lacking some built-in macros that are planned to make it possible to reach out to the Java code structure via reflection. I am also thinking about developing some simple macros to read Java source files and include them into documentation. When I have some results, I will write about them.

If you have any idea what else this technology could be used for, do not hesitate to contact me.


var and language design

What is var in Java

The var predefined type introduced in Java 10 lets you declare local variables without specifying the type of the variable when you assign a value to it. The type of the assigned expression already defines the type of the variable, thus there is no reason to type the type on the left side of the line again. It is especially good when you have some complex, long types with a lot of generics, for example

HashMap<String,TreeMap<Integer,String>> myMap = mapGenerator();

Generic types you could already infer in prior Java versions, but now you can simply type

var myMap = mapGenerator();

This is simpler, and most of the time more readable, than the previous version. The aim of var is mainly readability. It is important to understand that variables declared this way will still have a type, and the introduction of this new predefined type (not a keyword) does not turn Java into a dynamically typed language. There are a few things that you can do this way that you could not do before, or could do only in a much more verbose way. For example, when you assign an instance of an anonymous class to a variable, you can invoke the methods declared in the class through the var declared variable. For example:

var m = new Object(){ void z(){} };
m.z();

you can invoke the method z() but the code

Object m = new Object(){ void z(){} };
m.z();

does not compile. You can do that because anonymous classes actually have a name at their birth; they just lose it when the instance gets assigned to a variable declared to be of type Object.

There is a little shady side to var. This way, we violate the general rule to instantiate the concrete class but declare the variable to be of the interface type. This is a general abstraction rule that we usually follow in Java. When I create a method that returns a HashMap, I usually declare the return value to be a Map. That is because HashMap is the implementation of the return value and, as such, is none of the business of the caller. What I declare in the return type is that I return something that implements the Map interface. How I do it is my own duty. Similarly, we usually declare the fields in classes to be of some interface type if possible. The same rule should also be followed by local variables. A few times it helped me a lot when I declared a local variable to be a Set while the actual value was a TreeSet, and then, typing the code, I faced an error. That made me realize that I was using features that are not Set but SortedSet, that sortedness was important in that special case, that it would also be important for the caller, and thus that I had to change the return type of the method to SortedSet as well. Note that SortedSet in this example is still an interface and not an implementation class.
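
To make the trade-off tangible, here is a minimal sketch of that situation; the class and method names are made up for illustration:

import java.util.Set;
import java.util.SortedSet;
import java.util.TreeSet;

public class Sortedness {
    // Declaring the return type as the Set interface hides the sorted nature of the value.
    static Set<String> names() {
        return new TreeSet<>(Set.of("Cecile", "Alice", "Bob"));
    }

    public static void main(String[] args) {
        Set<String> names = names();
        // names.first() would not compile: first() is declared on SortedSet, not on Set.
        // If the ordering matters to the caller, the honest fix is to declare
        // SortedSet both here and in the return type of the method.
        SortedSet<String> sorted = new TreeSet<>(names);
        System.out.println(sorted.first()); // prints Alice
    }
}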

With the use of var we lose that discipline, and we gain somewhat simpler source code. It is a trade-off, as always. In the case of local variables, the use of the variable is close, in terms of source code lines, to the declaration, so the developer can see at a glance what is what and what is happening; the "bad" side of this trade-off is therefore acceptable. The same trade-off in the case of method return values or fields is not acceptable. These class members can be used from different classes and different modules. It is not only difficult, it may even be impossible, to see all the uses of these values, so there we stay with the good old way: declare the type.

The future of var (just ideas)

There are cases when you cannot use var even for local variables. Many times we have the following coding pattern:

final var variable; // this does not work in Java 11
if ( some condition ) {
    variable = expression_1;
    // do something here
} else {
    variable = expression_2;
    // do something here
}

Here we cannot use var because there is no expression assigned to the variable in the declaration itself. The compiler, however, could be extended. From now on, what I am talking about is not Java as it is now; it is what I imagine it could be in some future version.

If the structure is simple and the “do something here” is empty, then the structure can be transformed into a ternary operator:

final var variable = some condition ? ( expression_1 ) : (expression_2)

In this case, we can use the var declaration even in today's Java, e.g., Java 11. However, be careful!

var h = true ? 1L : 3.3;

What will be the actual type of the variable h in this example? Number? The ternary operator has complex and special type coercion rules, which usually do not cause any issue because the two expressions are close to each other. If we let the structure described above use a similar type coercion, then the expressions are not that close to each other. As of now, the distance is far enough for Java not to allow the var type definition. My personal opinion is that the var declaration should be extended sometime in the future to allow the above structure, but only in the case when the two (or more, in case of a more complex structure) expressions have exactly the same type. Otherwise, we may end up having one expression that results in an int, another that results in a String, and then what will the type of the variable be? Do not peek at the answer below before you make your guess!

(This great example was given by Nicolai Parlog.)
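
For the record, a quick experiment shows what the compiler actually does; this is a minimal sketch, and the intersection type in the comment is only approximate:

public class VarTernary {
    public static void main(String[] args) {
        var h = true ? 1L : 3.3;
        // long and double are promoted to double by the ternary operator
        System.out.println(((Object) h).getClass()); // class java.lang.Double

        var x = true ? 42 : "fortytwo";
        // the compile-time type of x is not Object but a non-denotable
        // intersection, roughly Serializable & Comparable<...>
        System.out.println(x.getClass()); // class java.lang.Integer at runtime
    }
}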

I can also imagine that in the future we will have something similar to Scala's val, which is final var in Java 11. I do not like the var vs. val naming, though. It is extremely sexy and geekish, but very easy to mistake one for the other. However, if we have a local variable declaration that starts with the final keyword, then why do we need the var keyword after that?

Finally, I truly believe that var is a great tool in Java 11, but I also expect that its role will be extended in the future.

Implementing Basic REST APIs with JAX-RS

This is a guest article promoted by PACKT, the publisher I work with to get my books to the readers.

Learn how to implement basic REST APIs with JAX-RS in this article by Mario-Leander Reimer, a chief technologist for QAware GmbH and a senior Java developer and architect with several years of experience in designing complex and large-scale distributed system architectures.

This article will take a look at how to implement a REST resource using basic JAX-RS annotations. You’ll implement a REST API to get a list of books so that you’ll be able to create new books, get a book by ISBN, update books, and delete a book. The complete code for this book is also available at https://github.com/PacktPublishing/Building-RESTful-Web-Services-with-Java-EE-8.

Conceptual view of this section

You'll create a basic project skeleton and prepare a simple class called BookResource, and use this to implement the CRUD REST API for your books. First, you need to annotate your class using the proper annotations. Use the @Path annotation to specify the path for your books API, which is "books", and make it a @RequestScoped CDI bean.

Now, to implement your business logic, you can use another CDI bean. So, you need to get it injected into this one. This other CDI bean is called bookshelf, and you’ll use the CDI @Inject annotation to get a reference to your bookshelf. Next, implement a method to get hold of a list of all books.

What you see here is that you have a books() method, which is @GET annotated, produces MediaType.APPLICATION_JSON, and returns a JAX-RS Response. You construct a response of ok, which is HTTP 200, use bookshelf.findAll(), a collection of books, as the body, and then build the response. The BookResource.java file should look as follows:

@Path("books")
@RequestScoped
public class BookResource {

@Inject
private Bookshelf bookshelf;

@GET
@Produces(MediaType.APPLICATION_JSON)
public Response books() {
return Response.ok(bookshelf.findAll()).build();
}

Next, implement a GET method to get a specific book. To do this, you have a @GET annotated method, but this time with the @Path annotation carrying the "/{isbn}" template. To get hold of the parameter called isbn, use the @PathParam annotation. Use bookshelf to find your book by ISBN, and return the book found with the HTTP status code 200, that is, ok:

@GET
@Path("/{isbn}")
public Response get(@PathParam("isbn") String isbn) {
    Book book = bookshelf.findByISBN(isbn);
    return Response.ok(book).build();
}

In order to create something, it’s a convention to use HTTP POST as a method. You consume the application JSON and expect the JSON structure of a book. You call bookshelf.create with the book parameter and then use UriBuilder to construct the URI for the just-created book; this is also a convention. Return this URI using Response.created, which matches the HTTP status code 201, and call build() to build the final response:

@POST
@Consumes(MediaType.APPLICATION_JSON)
public Response create(Book book) {
    if (bookshelf.exists(book.getIsbn())) {
        return Response.status(Response.Status.CONFLICT).build();
    }

    bookshelf.create(book);
    URI location = UriBuilder.fromResource(BookResource.class)
            .path("/{isbn}")
            .resolveTemplate("isbn", book.getIsbn())
            .build();
    return Response.created(location).build();
}

Next, implement the update method for an existing book. Again, it's a convention to use the HTTP method PUT to update a resource at a specific location. Use the @Path annotation with a value of "/{isbn}". The isbn is passed to the update() method as a parameter, together with the JSON structure of your book. Use bookshelf.update to update the book and, in the end, return the status code ok:

@PUT
@Path("/{isbn}")
public Response update(@PathParam("isbn") String isbn, Book book) {
    bookshelf.update(isbn, book);
    return Response.ok().build();
}

Finally, implement the delete message and use the HTTP method DELETE on the path of an identified ISBN. Using the @PathParam annotation here, call bookshelf.delete() and return ok if everything went well:

@DELETE
@Path("/{isbn}")
public Response delete(@PathParam("isbn") String isbn) {
    bookshelf.delete(isbn);
    return Response.ok().build();
}

This is the CRUD implementation for your book resource. Use a Docker container and the Payara Server micro edition to run everything. Copy your WAR file to the deployments directory and then you’re up and running:

FROM payara/micro:5-SNAPSHOT

COPY target/library-service.war /opt/payara/deployments

See if everything's running using your REST client (Postman). First, get the list of books; this works as expected.

If you want to create a new book, issue the POST create new book request; the create() method shown above answers with 201 Created. Get the new book using GET new book; this is the book you just created.

Update the book using Update new book, and you'll get a status code of 200 OK. You can then fetch the updated book using GET new book and see the updated title.

Finally, you can delete the book. When you get the list of books again, your newly created book is not part of it anymore.
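
If you would rather script this walkthrough than click in Postman, a smoke test with the Java 11 HTTP client could look like the following sketch; the base URL and the JSON field names are assumptions, adjust them to your deployment:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class BookApiSmokeTest {
    public static void main(String[] args) throws Exception {
        var client = HttpClient.newHttpClient();
        var base = "http://localhost:8080/library-service/api/books"; // hypothetical deployment URL
        var json = "{\"isbn\":\"1234567890\",\"title\":\"New Book\"}"; // hypothetical Book JSON

        var post = HttpRequest.newBuilder(URI.create(base))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(json))
                .build();
        // expect 201 Created with a Location header pointing to the new book
        System.out.println(client.send(post, HttpResponse.BodyHandlers.ofString()).statusCode());

        var get = HttpRequest.newBuilder(URI.create(base + "/1234567890")).GET().build();
        System.out.println(client.send(get, HttpResponse.BodyHandlers.ofString()).body());

        var delete = HttpRequest.newBuilder(URI.create(base + "/1234567890")).DELETE().build();
        // expect 200 OK
        System.out.println(client.send(delete, HttpResponse.BodyHandlers.ofString()).statusCode());
    }
}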

If you found this article interesting, you can explore Building RESTful Web Services with Java EE 8 to learn the fundamentals of Java EE 8 APIs to build effective web services. Building RESTful Web Services with Java EE 8 also guides you in leveraging the power of asynchronous APIs on the server and client side, and you will learn to use server-sent events (SSEs) for push communication.


Comparing Golang and Understanding Value Types BaselOne Video

I was invited again to deliver my talk comparing the Go language with Java and to talk a bit about value types. This time it was in Basel, at the BaselOne conference. This one-day conference happens once a year in Basel at the Markthalle, a kind of shopping mall close to the railway station. The place is not too posh, so you will not feel uncomfortable walking around in the typical developer outfit. At the same time, there are a lot of food places, and the conference organizers provided coupons valid for some food. There were three rooms for parallel talks. One of the rooms is actually a café, and you can really buy a coffee before the talk and then listen to the talk while sipping your mocha. The audience was composed mainly of local developers from within Switzerland. The organization of the conference is very much related to the Swiss Java Users' Group. There was no official video recording of the conference, but the organizers welcomed my recording and gave me permission to publish the video, so here it goes.

I have edited the video I recorded during the conference, where I presented Comparing Golang and Understanding Value Types. The video shows the slides and, just for the sake of completeness and to increase the enjoyment factor, my slender self presenting in picture-in-picture.

I delivered the talk also in May in Vilnius and, before that, in April of the same year in Mainz at W-JAX. Both of those times the talk was disturbed by some external noise. This time we had nothing like that. I almost started to miss it. (Not really.)

Now JAX has also published the video:

https://vimeo.com/jaxtv/review/288743607/ce57328338

You can compare the three and see how different the same talk can be. (Seriously, I do not think it is interesting for anyone.) Here is the Vilnius conference video:

JEP 181 incompatibility, nesting classes / 2

JEP 181 is Nest-Based Access Control (https://openjdk.java.net/jeps/181). It was introduced in Java 11, and it deliberately introduced an incompatibility with previous versions. This is a good example that being compatible with prior versions of Java is not a rule carved into stone; rather, it serves to keep the consistency and steady development of the language. In this article, I will look at the change through an example that I came across a few years ago and show how Java 11 makes life easier and more consistent in this special case.

Java backward compatibility is limited to features and not to behavior

Original Situation

A few years ago, when I wrote the ScriptBasic for Java interpreter, which can be extended with Java methods so that they are available just as if they were written in BASIC, I created some unit tests. The unit test class contained an inner class that had some methods in it available for the BASIC code. The inner class was static and private, as it had nothing to do with any other class except the test; the class and the methods were still accessible to the test code because they resided in the same top-level class. To my dismay, however, the methods were not accessible via the BASIC programs. When I tried to call the methods through the BASIC interpreter, which itself was using reflective access, I got IllegalAccessException.

To demonstrate the situation, after a few hours of debugging and learning, I created the following simple code:

package javax0;

import java.lang.reflect.InvocationTargetException;
import java.lang.reflect.Method;

public class ReflThrow {
    private class Nested {
        private void m(){
            System.out.println("m called");
        }
    }
    public static void main(String[] args)
            throws NoSuchMethodException,
            InvocationTargetException,
            IllegalAccessException {
        ReflThrow me = new ReflThrow();
        Nested n = me.new Nested();
        n.m();
        Method m = Nested.class.getDeclaredMethod("m");
        m.invoke(n);
    }
}

If you run this code with Java N where N < 11 then you get something similar to this:

m called
Exception in thread "main" java.lang.IllegalAccessException: class ReflThrow cannot access a member of class ReflThrow$Nested with modifiers "private"
    at java.base/jdk.internal.reflect.Reflection.throwIllegalAccessException(Reflection.java:423)
    at java.base/jdk.internal.reflect.Reflection.throwIllegalAccessException(Reflection.java:414)
...

Using Java 11, however, it works fine (and presumably it will also work fine with later versions of Java).

Explanation

Up to version 11 of Java, the JVM did not handle inner and nested classes. All classes in the JVM are top-level classes. The Java compiler creates specially named top-level classes from the inner and nested classes. For example, the Java compiler may create the class files ReflThrow.class and ReflThrow$Nested.class. Because they are top-level classes for the JVM, the code in the class ReflThrow cannot invoke the private method m() of Nested when they are two different top-level classes.

On the Java level, however, when these classes are created from a nested structure, it is possible. To make it happen, the compiler creates an extra synthetic method inside the class Nested that the code in ReflThrow can call, and this method, already inside Nested, calls m().

The synthetic methods have the SYNTHETIC modifier set so that the compiler later knows that other code should not "see" those methods. That way, invoking the method m() works nicely.
On the other hand, when we try to call the method m() using its name and reflective access, the route goes directly through the class boundary without invoking any synthetic method, and because the method is private to the class it is in, the invocation throws the exception.

Java 11 changes this. JEP 181, incorporated into the already released Java 11, introduces the notion of a nest. "Nests allow classes that are logically part of the same code entity, but which are compiled to distinct class files, to access each other's private members without the need for compilers to insert accessibility-broadening bridge methods." It simply means that there are classes which are nests and there are classes which belong to a nest. When the code is generated from Java, the top-level class is the nesting host and the classes inside are nested. This structure on the JVM level leaves a lot of room for different language structures and does not impose a Java-specific structure on the execution environment. The JVM is aimed to be polyglot, and it is going to be even "more" polyglot with the introduction of GraalVM in the future. Using this structure, the JVM simply sees that two classes are in the same nest, thus they can access each other's private methods, fields and other members. This also means that there are no bridge methods with different access restrictions, and that way reflection goes through exactly the same access boundaries as a normal Java call.

Summary / Takeaway

Java does not change overnight and is mostly backward compatible. Backward compatibility is, however, limited to features and not to behavior. JEP 181 did not, and never actually intended to, reproduce the not absolutely perfect IllegalAccessException-throwing behavior of reflective access to nested classes. This behavior was an implementation artifact rather than a language feature, and it was fixed in Java 11.

Nesting Java classes

Nested classes and private methods

When you have a class inside another class, they can see each other's private methods. This is not well known among Java developers. Many candidates during interviews say that private is a visibility that lets code see a member only if the code is in the same class. This is actually true, but it would be more precise to say that there has to be a class that both the code and the member are in. When we have nested and inner classes, it can happen that the private member and the code using it are in the same class and, at the same time, they are also in different classes.

As an example, if I have two nested classes in a top-level class then the code in one of the nested classes can see a private member of the other nested class.

It starts to be interesting when we look at the generated code. The JVM does not care about classes inside other classes. It deals with JVM "top-level" classes. The compiler will create .class files with names like A$B.class when you have a class named B inside a class A. If there is a private method in B callable from A, then the JVM sees that the code in A.class calls the method in A$B.class. The JVM checks access control. When we discussed this with juniors, somebody suggested that the JVM probably does not care about the modifiers. That is not true. Try to compile A.java and B.java, two top-level classes, with some code in A calling a public method in B. When you have A.class and B.class, modify the method in B.java from public to private and recompile only B to a new B.class. Start the application and you will see that the JVM cares about the access modifiers a lot; a sketch of this experiment follows below. Still, in the nested case you can invoke from A.class a method in A$B.class.
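
Here is a minimal sketch of that experiment; the class and method names are made up:

// B.java -- compile it first with the method public
public class B {
    public void hello() { // change this to private later and recompile only B
        System.out.println("hello from B");
    }
}

// A.java -- compiles fine against the public version of B
public class A {
    public static void main(String[] args) {
        new B().hello();
    }
}

After recompiling only B with the method made private, running java A fails with a java.lang.IllegalAccessError: the JVM really does check the access modifiers at run time.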

To resolve this conflict, Java generates extra synthetic methods that are inherently package private, call the original private method inside the same class, and are callable as far as JVM access control is concerned. On the other hand, the Java compiler will not compile the code if you figure out the name of the generated method and try to call it from the Java source code directly. I wrote about this in detail more than 4 years ago: https://javax0.wordpress.com/2014/02/26/syntethic-and-bridge-methods/.

If you are a seasoned developer, then you probably think that this is a weird and revolting hack. Java is so clean, elegant, concise and pure, except for this hack. And also perhaps the hack of the Integer cache that makes small Integer objects (typical test values) equal using ==, while larger values (typical production values) are only equals() but not ==. But other than the synthetic methods and the Integer cache hack, Java is clean, elegant, concise and pure. (You may get that I am a Monty Python fan.)

The reason for this is that nested classes were not part of the original Java; they were added only in version 1.1. The solution was a hack, but there were more important things to do at that time, like introducing the JIT compiler, JDBC, RMI, reflection and some other things that we take for granted today. At that time the question was not whether the solution was nice and clean. Rather, the question was whether Java would survive at all and become a mainstream programming language, or die and remain a nice try. At that time I was still working as a sales rep, and coding was only a hobby, because coding jobs were scarce in Eastern Europe; they were mainly boring bookkeeping applications and were low paid. Those were a bit different times: the search engine was named AltaVista, we drank water from the tap, and Java had different priorities.

The consequence is that for more than 20 years we have had slightly larger JAR files, slightly slower Java execution (unless the JIT optimizes the call chain), and obnoxious warnings in the IDE suggesting that we had better have package protected methods instead of private ones in nested classes when we use them from the top-level or other nested classes.

Nest Hosts

Now it seems that this 20-year-old technical debt will be paid. JEP 181 (http://openjdk.java.net/jeps/181) gets into Java 11, and it solves this issue by introducing a new notion: the nest. Currently, the Java bytecode contains some information about the relationship between classes: the JVM has information that a certain class is a nested class of another class, and this is not only the name. This information could serve the JVM to decide whether a piece of code in one class is or is not allowed to access a private member of another class, but the JEP 181 design is more general. As times changed, the JVM is not the Java Virtual Machine anymore. Well, yes, it is, at least by name; however, it is a virtual machine that happens to execute bytecode compiled from Java, or, for that matter, from some other languages. There are many languages that target the JVM, and keeping that in mind, JEP 181 does not want to tie the new access control feature of the JVM to a particular feature of the Java language.

JEP 181 defines the notions of a NestHost and NestMembers as attributes of a class. The compiler fills in these attributes, and when there is access to a private member of a class from a different class, the JVM access control can check: are the two classes in the same nest or not? If they are in the same nest, then the access is allowed; otherwise, it is not. We also get methods added to the reflective access, so we can get the list of the classes that are in a nest.

Simple Nest Example

Using the

$ java -version
java version "11-ea" 2018-09-25
Java(TM) SE Runtime Environment 18.9 (build 11-ea+25)
Java HotSpot(TM) 64-Bit Server VM 18.9 (build 11-ea+25, mixed mode)

version of Java, which is available today, we can already experiment. We can create a simple class:

package nesttest;
public class NestingHost {
    public static class NestedClass1 {
        private void privateMethod() {
            new NestedClass2().privateMethod();
        }
    }
    public static class NestedClass2 {
        private void privateMethod() {
            new NestedClass1().privateMethod();
        }
    }
}

Pretty simple, and it does nothing. The private methods call each other; this cross-class private access is exactly what we want to examine, and it also forces the pre-Java 11 compiler to generate the synthetic bridge methods that we will see later in the bytecode.
The class to read the nesting information is the following:

package nesttest;

import java.util.Arrays;
import java.util.stream.Collectors;

public class TestNest {
    public static void main(String[] args) {
        Class<?> host = NestingHost.class.getNestHost();
        Class<?>[] nestlings = NestingHost.class.getNestMembers();
        System.out.println("Mother bird is: " + host);
        System.out.println("Nest dwellers are :\n" +
                Arrays.stream(nestlings).map(Class::getName)
                      .collect(Collectors.joining("\n")));
    }
}

The printout is as expected:

Mother bird is: class nesttest.NestingHost
Nest dwellers are :
nesttest.NestingHost
nesttest.NestingHost$NestedClass2
nesttest.NestingHost$NestedClass1

Note that the nesting host is also listed among the nest members, though this information is fairly obvious and redundant. However, such a design may allow some languages to exclude the private members of the nesting host itself from the access and allow access only for the nestlings.

Byte Code

The compilation using the JDK11 compiler generates the files

  • NestingHost$NestedClass1.class
  • NestingHost$NestedClass2.class
  • NestingHost.class
  • TestNest.class

There is no change. On the other hand, if we look at the bytecode using the javap disassembler, we see the following:

$ javap -v build/classes/java/main/nesttest/NestingHost\$NestedClass1.class
Classfile .../packt/Fundamentals-of-java-18.9/sources/ch08/bulkorders/build/classes/java/main/nesttest/NestingHost$NestedClass1.class
  Last modified Aug 6, 2018; size 557 bytes
  MD5 checksum 5ce1e0633850dd87bd2793844a102c52
  Compiled from "NestingHost.java"
public class nesttest.NestingHost$NestedClass1
  minor version: 0
  major version: 55
  flags: (0x0021) ACC_PUBLIC, ACC_SUPER
  this_class: #5                          // nesttest/NestingHost$NestedClass1
  super_class: #6                         // java/lang/Object
  interfaces: 0, fields: 0, methods: 2, attributes: 3
Constant pool:

*** CONSTANT POOL DELETED FROM THE PRINTOUT ***

{
  public nesttest.NestingHost$NestedClass1();
    descriptor: ()V
    flags: (0x0001) ACC_PUBLIC
    Code:
      stack=1, locals=1, args_size=1
         0: aload_0
         1: invokespecial #1                  // Method java/lang/Object."<init>":()V
         4: return
      LineNumberTable:
        line 6: 0
      LocalVariableTable:
        Start  Length  Slot  Name   Signature
            0       5     0  this   Lnesttest/NestingHost$NestedClass1;
}
SourceFile: "NestingHost.java"
NestHost: class nesttest/NestingHost
InnerClasses:
  public static #13= #5 of #20;           // NestedClass1=class nesttest/NestingHost$NestedClass1 of class nesttest/NestingHost
  public static #23= #2 of #20;           // NestedClass2=class nesttest/NestingHost$NestedClass2 of class nesttest/NestingHost

If we compile the same class using the JDK 10 compiler, then the disassembled lines are the following:

$ javap -v build/classes/java/main/nesttest/NestingHost\$NestedClass1.class
Classfile /C:/Users/peter_verhas/Dropbox/packt/Fundamentals-of-java-18.9/sources/ch08/bulkorders/build/classes/java/main/nesttest/NestingHost$NestedClass1.class
  Last modified Aug 6, 2018; size 722 bytes
  MD5 checksum 8c46ede328a3f0ca265045a5241219e9
  Compiled from "NestingHost.java"
public class nesttest.NestingHost$NestedClass1
  minor version: 0
  major version: 54
  flags: (0x0021) ACC_PUBLIC, ACC_SUPER
  this_class: #6                          // nesttest/NestingHost$NestedClass1
  super_class: #7                         // java/lang/Object
  interfaces: 0, fields: 0, methods: 3, attributes: 2
Constant pool:

*** CONSTANT POOL DELETED FROM THE PRINTOUT ***

{
  public nesttest.NestingHost$NestedClass1();
    descriptor: ()V
    flags: (0x0001) ACC_PUBLIC
    Code:
      stack=1, locals=1, args_size=1
         0: aload_0
         1: invokespecial #2                  // Method java/lang/Object."<init>":()V
         4: return
      LineNumberTable:
        line 6: 0
      LocalVariableTable:
        Start  Length  Slot  Name   Signature
            0       5     0  this   Lnesttest/NestingHost$NestedClass1;

  static void access$100(nesttest.NestingHost$NestedClass1);
    descriptor: (Lnesttest/NestingHost$NestedClass1;)V
    flags: (0x1008) ACC_STATIC, ACC_SYNTHETIC
    Code:
      stack=1, locals=1, args_size=1
         0: aload_0
         1: invokespecial #1                  // Method privateMethod:()V
         4: return
      LineNumberTable:
        line 6: 0
      LocalVariableTable:
        Start  Length  Slot  Name   Signature
            0       5     0    x0   Lnesttest/NestingHost$NestedClass1;
}
SourceFile: "NestingHost.java"
InnerClasses:
  public static #14= #6 of #25;           // NestedClass1=class nesttest/NestingHost$NestedClass1 of class nesttest/NestingHost
  public static #27= #3 of #25;           // NestedClass2=class nesttest/NestingHost$NestedClass2 of class nesttest/NestingHost

The Java 10 compiler generates the synthetic access$100 method; the Java 11 compiler does not. Instead, it records the nest host in the class file. We finally got rid of those synthetic methods that were causing surprises when some framework code listed all the methods reflectively.

Hack the nest

Let's play cuckoo a bit. We can modify the code so that now it does something:

package nesttest;
public class NestingHost {
//    public class NestedClass1 {
//        public void publicMethod() {
//            new NestedClass2().privateMethod(); /* <-- this is line 8 */
//        }
//    }

    public class NestedClass2 {
        private void privateMethod() {
            System.out.println("hallo");
        }
    }
}

We also create a simple test class:

package nesttest;

public class HackNest {

    public static void main(String[] args) {
//        var nestling =new NestingHost().new NestedClass1();
//        nestling.publicMethod();
    }
}

First, remove all the // from the start of the lines and compile the project. It works like a charm and prints out hallo. After this, copy the generated classes to a safe place, like the root of the project:

$ cp build/classes/java/main/nesttest/NestingHost\$NestedClass1.class .
$ cp build/classes/java/main/nesttest/HackNest.class .

Let's compile the project again, this time with the comments in place, and after this copy back the two class files from the previous compilation:

$ cp HackNest.class build/classes/java/main/nesttest/
$ cp NestingHost\$NestedClass1.class build/classes/java/main/nesttest/

Now we have a NestingHost that knows it has only one nestling: NestedClass2. The test code, however, thinks that there is another nestling, NestedClass1, which also has a public method that can be invoked. This way we try to sneak an extra nestling into the nest. If we execute the code, then we get an error:

$ java -cp build/classes/java/main/ nesttest.HackNest
Exception in thread "main" java.lang.IncompatibleClassChangeError: Type nesttest.NestingHost$NestedClass1 is not a nest member of nesttest.NestingHost: current type is not listed as a nest member
        at nesttest.NestingHost$NestedClass1.publicMethod(NestingHost.java:8)
        at nesttest.HackNest.main(HackNest.java:7)

It is important to recognize from the stack trace that the line causing the error is the one where we want to invoke the private method. The Java runtime does the check only at that point and not sooner.

Do we like it or not? Where is the fail-fast principle? Why does the Java runtime start to execute the class and check the nest structure only when it is very much needed? The reason, as many times in the case of Java: backward compatibility. The JVM could check the nest structure consistency when all the classes are loaded. However, the classes are only loaded when they are used. It would have been possible to change the class loading in Java 11 to load all the nested classes along with the nesting host, but it would break backward compatibility. If nothing else, the lazy singleton pattern, shown below, would break apart, and we do not want that. We love singleton, but only when it is single malt.
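
For reference, this is the initialization-on-demand holder idiom the paragraph above alludes to. If the JVM loaded every nest member eagerly together with the nesting host, the Holder class would be initialized up front and the laziness would be gone:

public class Singleton {
    private Singleton() {
    }

    // Holder is loaded, and INSTANCE created, only on the first call to instance()
    private static class Holder {
        static final Singleton INSTANCE = new Singleton();
    }

    public static Singleton instance() {
        return Holder.INSTANCE;
    }
}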

Conclusion

JEP 181 is a small change in Java. Most developers will not even notice. It is a technical debt eliminated, and if the core Java project did not eliminate its technical debt, then what could we expect from the average developer?

As the old Latin saying says: “Debitum technica necesse est deletur.”

Update

Brian Goetz, 2018-09-06, on https://dzone.com/articles/nesting-java-classes:
“You theorize that the motivation for the lazy check is compatibility, but there is a much simpler explanation: this is yet another example of Java’s pervasive commitment to dynamic linkage. References to other classes and class members are only checked for validity and accessibility when they are actually needed, such as when a method is called, or a class is extended. This is just more of the same — nothing new here.”

HTTP/2 Server Push

The new version of the HTTP protocol, HTTP/2, lets the server push content to the client before the client requests that particular content. There are many other modifications in the protocol if we compare the previous version 1.1 with the new version 2, but in this article, I will focus on the push functionality. I will discuss briefly how it can be used in a servlet, and I will also discuss a bit how to test and see whether it really works or not. Before writing this article, my original intention was to create a demonstration of HTTP/2 showing how much faster the sample page loads with the push than without. It is going to be one chapter in my video tutorial published by PACKT. During the development of the sample application I faced several problems, read some tutorials, and debugged the sample code a bit, gathering experience. In this article, I share this experience with you. That way, this article is a bit more than just a simple introductory tutorial. Nevertheless, it is also a bit longer, so TL;DR if you are impatient.

HTTP versions

HTTP/2 is a new version of the HTTP protocol. The protocol had three versions prior to 2: 0.9, 1.0 and 1.1. The first one was only an experiment, starting in 1991. The first real version was 1.0, released in 1996. This was the version that you probably met if you were using the internet at that time and still remember the Mosaic browser. This version was soon followed by version 1.1 the next year, 1997. The major difference between 1.0 and 1.1 was the Host header field, which made it possible to operate several websites on one machine, one server, one IP address, and one port.

HTTP 1.1

Both versions 1.0 and 1.1 are extremely simple. The client opens a TCP channel to the server and writes the request into it as plain text. The request starts with a request line, which is followed by header lines, an empty line and the body of the request. The body may be binary. The response has the same structure, except that the first line is a status line rather than a request line.
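
To see how simple it really is, here is a minimal sketch that performs a complete HTTP/1.1 exchange over a bare socket; the host example.com is just a convenient public test server:

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.Socket;

public class RawHttp {
    public static void main(String[] args) throws IOException {
        try (Socket socket = new Socket("example.com", 80)) {
            var out = new PrintWriter(socket.getOutputStream());
            // request line, header lines, empty line -- exactly the structure described above
            out.print("GET / HTTP/1.1\r\n");
            out.print("Host: example.com\r\n");
            out.print("Connection: close\r\n");
            out.print("\r\n");
            out.flush();
            var in = new BufferedReader(new InputStreamReader(socket.getInputStream()));
            String line;
            while ((line = in.readLine()) != null) {
                System.out.println(line); // status line, headers, empty line, then the body
            }
        }
    }
}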

Imho, the simple approach was extremely important in those days because it aided the spread and use of the protocol. You can find this type of pattern in different areas of the industry. This is how the world develops in an agile way. First, you develop something simple that works, and then you go on making it more and more complex as the environment requires it to be more powerful. Making a long shot and aiming for the complex and perfect solution the first time usually does not work. If you remember the Star Wars film series, you know that death stars are never finished, and they are razed at the end. But that does not mean that the simple version we start with has no problems.

HTTP/1.1 problems

HTTP/1.1 had a lot of problems. A real Englishman would say it was far from optimal in its use of the network. It was wasting bandwidth. In the early times when HTTP was invented, a web page was text. Today it is text, CSS, JavaScript, images, sounds, videos, and digitized smell, taste, and touch samples, not to mention the direct neural stimulation command files. These last content types following videos in this list are still rare, but you should be prepared, and that is what HTTP/2 is aiming at: preparing for the future. With HTTP/1.1, the browser downloads these resources one after the other in separate TCP channels. The number of TCP channels the server, or for that matter the client, can handle is finite, therefore the browsers limit themselves to opening no more than four TCP channels. It means that while four content elements of the page are downloading, the other elements wait in a queue to be downloaded. If we have some very slowly downloading content, it may choke the download of the whole web page. You can run experiments with that: write some simple servlet that sends a complex HTML page to the browser, then let some of the content elements download slowly and others fast. Doing it in debug mode using Chrome, you will get a nice Gantt chart of the download timings.

The network is also wasted by creating a new TCP channel for each new HTTP connection. Creating a TCP channel requires a SYN package traveling from the client to the server, then a SYN-ACK from the server to the client, and then an ACK from the client to the server again, before we can start sending data. In addition to these, the TCP protocol limits itself so as not to flood a lossy channel with a lot of data that is going to be lost anyway: it starts slow, sending only a few data packages at the beginning, and increases the speed only when it sees that packages are arriving in good shape. Let's just think about the three-way handshake that starts the TCP channel. The network between the client and the server has a certain lag and a certain bandwidth. The lag is the time needed for a package to travel from the client to the server or the other way around. The bandwidth is the number of bits that can be pushed through the network between the client and the server during a given time (like one second). Imagine a hose that you use to fill buckets. If it is long, it may take two or three seconds until the water starts to pour after you open the tap. If it is narrow, filling a bucket can take a minute. In that case, the 2-3 seconds at the start are not a big deal; you have to wait a minute anyway. On the other hand, if the hose is wide, the water may run in an amount that fills the bucket in ten seconds; a three-second delay is then 30%, which is significant. Similarly, the TCP slow-start lag is significant in fast networks, where the bandwidth is abundant. Fortunately, we are going in this direction. We get fiber in our homes, replacing the 19.2kbps dial-up modems. At the same time, however, the HTTP/1.1 lag is an increasing problem.

There is a header field, Keep-Alive, that can tell the server not to close the TCP channel but rather reuse it for the next HTTP request, and this patch on the scars of HTTP/1.1 helps a lot, but not enough. The blocking slow resource problem is still there. There are other problems with HTTP/1.1 that HTTP/2 addresses, but this one, using many TCP channels to download several content pieces to the client from the same server, is the main issue related to server push, which is the topic of this article.

HTTP/2

HTTP/2 uses a single TCP channel between the server and the client. When the content is downloaded from different servers, the client will eventually open separate TCP channels to each server, but for the content pieces that come from the same server, HTTP/2 uses only one TCP channel. Using multiple TCP channels between a certain client and a single server does not speed up the communication; it was only a workaround in HTTP/1.1 to partially mitigate the choking effect of slow resources.

A request and its corresponding response do not use the TCP channel exclusively in HTTP/2. There are frames, and the requests and the responses travel in these frames. If a content piece is created slowly and does not use the channel for some time, then other resources can get frames and travel in the same TCP channel. This is a significant boost to download speed in many cases. The change is also transparent for the browser programs, be they JavaScript or WebAssembly. In the case of WebAssembly, the change is extremely simple: WebAssembly does not directly handle XMLHttpRequest; it uses functions implemented in and imported from JavaScript. In the case of JavaScript, the browser hides the transport complexity. The JavaScript network API is just the same as it was before. You request a resource, you get one, and you do not need to care whether it was traveling in its exclusive TCP channel, mixed into HTTP/2 frames, or sent to you by pigeon post. On the server side, the approach is the same. In the case of Java, a servlet application gets a request and creates a response. It is up to the container, the web server, and the client browser how they make it travel through the net.
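
To illustrate how transparent the transport is, here is a small sketch using the Java 11 HttpClient: the calling code is identical whether the server speaks HTTP/1.1 or HTTP/2, and you can only tell the difference by asking for the negotiated version afterwards.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class Http2Probe {
    public static void main(String[] args) throws Exception {
        // ask for HTTP/2; the client silently falls back to HTTP/1.1 if the server cannot do it
        var client = HttpClient.newBuilder()
                .version(HttpClient.Version.HTTP_2)
                .build();
        var request = HttpRequest.newBuilder(URI.create("https://www.example.com/")).build();
        var response = client.send(request, HttpResponse.BodyHandlers.discarding());
        System.out.println("negotiated protocol: " + response.version()); // HTTP_2 or HTTP_1_1
    }
}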

The only place where a server application may change is server push. This is a new feature, and the new API gives a server application the extra possibility to initiate a content push to the client.

When to Push, What to Push?

The server push can typically be used in a situation when the application prepares some content slowly, the application knows that it is going to be slow, and it also knows that there are other resources, typically many small icon images, that can be downloaded fast and will be needed by the client. Thus, the preconditions for the server push are:

  • the content requested is slow,
  • the application knows it is going to be slow,
  • there are other resources that the client will need after this resource has been downloaded,
  • the application knows what these resources are,
  • these resources can be downloaded fast.

In this case, the servlet can initiate one or more pushes that may deliver the content to the browser while the main content is being prepared. When the main content is ready and the client browser realizes that it needs the extra resources, they are already there. If we do not push the resources, they are downloaded only after the original resource has been processed and the browser realizes that the extra resources are needed; in a simple example, when the browser sees all the img tags and knows that it needs the icon images to render the page. This is what the above animation tries to show in a simple way.

How to push

Initiating a push is more than simple. The Servlet 4.0 standard extends HttpServletRequest to create a new PushBuilder whenever the servlet calls req.newPushBuilder() on the request object req. The push builder can be used to compose a push, and then invoking the method push() on it initiates the sending towards the client. It is as simple as that. The only parameter that you have to set is the path of the resource to be pushed.

var builder = req.newPushBuilder();
...
builder.path(s).push();

Sample application

The sample application to test server push is a servlet that responds with an HTML page that contains one hundred image references in a 10×10 table. Essentially these are small icons from the http://www.flaticon.com website.

The first thing the servlet does is initiate the icon downloads via server push. To do that, it creates one push builder, and this single object is used to initiate 100 pushes. After that, the servlet goes to sleep. This sleep simulates the slow inner working of the servlet response. A real application would use this time to gather the information needed for the response from different databases, from other services, and so on. During this time the server and the client have enough time to download the PNG files. When the response arrives, the files are there and the images are displayed instantaneously. At least, that is what we expect.

The servlet has a parameter named push that can be 1 or 0. If this parameter is 1, the servlet pushes the PNG files to the client; if it is 0, it does not. This way we can easily compare the speed of the two behaviors.

    public void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
        final String titleText;
        var builder = req.newPushBuilder(); // null if push is not available
        var etag = now();
        if (builder == null) {
            titleText = "Good old HTTP/1.1 download";
        } else {
            var pushRequested = parse(req).get("push", 1) == 1;
            if (pushRequested) {
                titleText = "HTTP/2 Push";
                sendPush(req, builder, etag); // push the 100 icons before going to sleep
            } else {
                titleText = "HTTP/2 without push";
            }
        }
        var lag = parse(req).get("lag", 5000);  // ms to sleep before sending the HTML
        var delay = parse(req).get("delay", 0); // ms to sleep between the image tags
        var sleep = new ThrottleTool.Sleeper();
        sleep.till(lag); // simulate the slow creation of the main content
        resp.setContentType("text/html");
        sendHtml(req, resp, titleText, etag, delay, sleep);
    }

The servlet can also be parameterized with the query parameters lag and delay.

lag is the number of milliseconds the servlet sleeps, counted from the start of the request, before it starts to send the HTML page to the client. The default value is 5000, which means that the HTML sending starts 5 seconds after the servlet started.

delay is the number of milliseconds the servlet sleeps between sending each image tag. The default value is zero, which means the servlet sends the HTML as fast as it can.
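
For example, assuming the demo is deployed at a hypothetical local address and context path (these depend on your own deployment), the two behaviors can be compared with URLs like

https://localhost:8443/demo?push=1&lag=5000&delay=0
https://localhost:8443/demo?push=0&lag=5000&delay=0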

For the push, it is interesting how we get the builder object. The line

var builder = req.newPushBuilder();

returns a new builder object that can also be null. It is null if the environment does not allow a push. The simplest such case is when the servlet is queried through plain HTTP and not HTTPS. Browsers implement HTTP/2 only over a TLS secure channel, so if we open the servlet via HTTP, it will not be able to push anything.

After this, the method sendPush() sends the push contents as the name implies. Here is the method:

    private void sendPush(HttpServletRequest req, PushBuilder builder, long etag) {
        for (var i = 0; i < 10; i++) {
            for (var j = 0; j < 10; j++) {
                var s = imagePath(i, j, req, etag); // relative URL of the icon PNG
                builder.path(s); // the path has to be set before every push
                builder.push();  // initiate the push; the builder can then be reused
            }
        }
    }

The method imagePath() calculates the relative URL of the PNG based on the indices, and this path is specified through the push builder. The builder is finally asked to push the content. This call to push() initiates the push on the server.

The builder is used literally one hundred times in this example. We do not need a new builder object for each push; we can safely reuse the same one. The only requirement is that we set all the parameters we need before the next push. This is usually only the path.
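
The following few lines sketch this reuse (the icon paths are made up for illustration). According to the Servlet 4.0 API, push() clears the path after the push is initiated, so it has to be set again every time:

var builder = req.newPushBuilder();
builder.path("img/icon-0-0.png").push(); // push() clears the path afterwards...
builder.path("img/icon-0-1.png").push(); // ...so we set it again for the next push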

Image Server

The icon images are not served directly by the server in the demo application. The URL of the icon resources is mapped to a servlet that reads the icon PNG from the disk and writes it to the response. The reason for this is twofold. The chronologically first reason is that during the application debugging I needed information about when and how the resource is collected, and in case the resource is just a plain file that the server handles directly, I do not have many possibilities to debug or log anything. The second reason is that the demo needs network throttling. Just as the main HTML resource waits 5 seconds before it vomits out the HTML text, the images also need to be slowed down to make a good demonstration effect. There is a throttling functionality in the debug mode of the browsers. However, it seems that this throttling in Chrome sits architecturally between the cache, where the pushed resources are temporarily stored, and the DOM displaying engine. Implementing the throttling in our own code on the demo server puts it at the right place from the experiment point of view, and it cannot happen that an already pushed and downloaded resource loads onto the screen throttled.

For this reason, the image servlet has two query parameters. imglag is the number of milliseconds the servlet waits after a hit arrives and before it starts to do anything. The default value, in case the parameter is missing, is 300 milliseconds. The other parameter is imgdelay, which specifies the number of milliseconds the server consumes sending the bytes of the image to the client. This is implemented in a way that the server sends each byte individually, one after the other, and in case the current time is proportionally too soon to serve the next byte, the server sleeps a bit. The default value is 1000, which means that each icon is delivered from the server in approximately 1.3 seconds. The code itself is not too educational from the server push point of view. You can see it in the GitHub repository https://github.com/verhas/pushbuilder/blob/master/src/main/java/javax0/pushbuilder/demo/ImageServer.java
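
For the curious, here is a minimal sketch of how such a proportional, byte-by-byte throttle can be implemented. The method and variable names are illustrative; this is not the actual code from the repository:

import java.io.IOException;
import java.io.OutputStream;

// Write 'content' to 'out' spread evenly over 'totalMillis' milliseconds
// by sleeping whenever the next byte would be sent ahead of schedule.
void writeThrottled(byte[] content, OutputStream out, long totalMillis) throws IOException {
    final long start = System.currentTimeMillis();
    for (int i = 0; i < content.length; i++) {
        // the point in time when the i-th byte is "due", proportionally
        final long due = start + totalMillis * i / content.length;
        final long ahead = due - System.currentTimeMillis();
        if (ahead > 0) {
            try {
                Thread.sleep(ahead); // too early: wait before sending the next byte
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return;
            }
        }
        out.write(content[i]);
    }
    out.flush();
}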

PushBuilder methods

The PushBuilder class has many other methods in case you need to build a request that has some special characteristics. When you initiate the push of a resource, you actually define an HTTP request for which the resource would be the answer. The actual resource is provided by the server. If it is a normal static resource, like an image, then the server picks it up from the file system. If it is some dynamically calculated resource, then the server starts the servlet that is targeted by the request. What the push builder really builds is a request that stays on the server and is used there; the client will never actually issue this request, since the resource has already been pushed. The setter methods of the class set request attributes, like parameters, headers, etc.

method()

You can set the method of the request that the resource is a response to. By default, this is GET, and this should be OK most of the time. If the resource is a response to a POST, PUT, DELETE, CONNECT, OPTIONS or TRACE request, then we cannot push the resource anyway, because the HTTP/2 standard does not allow resources of those kinds to be pushed. What remains is the HEAD method, which does not seem to be meaningful. I believe that the method method() is in the interface

  • for compatibility reasons, if there will ever be some new method,
  • for the sake of some application that uses a proprietary, non-standard method,
  • or as a joke (a method called method).

sessionId()

This method can be used to set the session id, which is usually carried in the JSESSIONID cookie or in a request parameter.

set, add and remove + Header()

You can modify the headers of the request using these methods. When you invoke setHeader(), the previous value is replaced. The addHeader() method adds a new header; when you reuse the same builder object, adding the same header again and again may result in the same header appearing in the request multiple times. Finally, removeHeader() removes a header.
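
The following short sketch illustrates these methods on a reused builder; the header name X-Demo, the values, and the icon path are made up for illustration:

var builder = req.newPushBuilder();
builder.setHeader("Accept", "image/png"); // replaces any previous Accept value
builder.addHeader("X-Demo", "first");     // adds a header
builder.addHeader("X-Demo", "second");    // the header is now present twice
builder.removeHeader("X-Demo");           // removes the header again
builder.path("img/icon-0-0.png").push();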

Note that some tutorials also add the header Content-type, and the value in some of the tutorials is image/png. This is erroneous, even though it does not do any harm. The headers we set on the push builder are used to request the resource. In our example, the images are not directly accessed by Tomcat, but rather are delivered through a servlet. This servlet reads the content of the PNG file from the disk and sends it to the writer that it acquires from the response object. This servlet will see the headers set in the push builder. If we set the content type to be image/png, it may think that we are a special kind of stupid, sending an HTTP GET request body in PNG format. Usually, it does not matter; servlets and servers tend to ignore the content type of a GET request, especially when the content has zero bytes. What format does a non-existent picture have?

get methods

What you can set on the push builder object you can also get out. You can use the methods getHeader(), getMethod(), getHeaderNames(), getPath() and so on to see what the current values in the push builder are.
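
For example, a line like the following (purely illustrative; a real application would use a proper logger) can show what is about to be pushed:

System.out.println("Pushing " + builder.getMethod() + " " + builder.getPath());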

Experiments

During the experiments, I could not see any difference between HTTP/2 with and without the push. The download using HTTP/2 was invariably faster than HTTP/1.1, but there was no difference between the push and the non-push version. I used Chrome Version 67.0.3396.99 (Official Build) (64-bit). The browser supports the push functionality, but the developer tools do not support debugging the push. Pushed contents are displayed as normally downloaded contents. There is some secret internal URL (well, not really a secret since it is published on the net) that shows the actual frames, but it is not too handy. What I could finally see there is that the push itself worked. The images were pushed to the browser while the main HTML page serving servlet was having its 5-second sleep, but after that, the browser was just downloading the images again, even when I switched on browser caching.

Since HTTP/2 was started by the SPDY protocol and it was pushed by Google (pun intended (again)), I strongly believed that it was my code that did something wrong. Finally, I gave up and fired up Firefox, and here you go, this is what you can see on the screen captures:

Summary Takeaway

Server push is an interesting topic and a powerful technology. There are a lot of preconditions to using it (see the list above, search for "PRECONDITIONS"). If these are all met, then you can think about implementing it, but you should experiment with different network setups, delays, and different client implementations. If the client implementation is less than optimal, then you may end up with a slower download with push than without. There is a lot of room for development in the clients. Some of these developments will just mature the way browsers handle server-pushed resources, but I am almost sure that soon we will also have a JavaScript API that can register callback functions to be triggered when a push starts, so that it will not only be the browser, autonomously, that can refuse a push stream but also the client-side JavaScript. Keep your eyes and your mind open for the development of HTTP/2.

References