Author Archives: Peter Verhas

About Peter Verhas

Java expert, developer, architect, author, speaker, teacher, mentor and he very much likes to be alive.

Lazy assignment in Java

Programmers are inherently lazy and, similis simili gaudet, they also like it when their programs are lazy. Have you ever heard of lazy loading? Or a lazy singleton? (I personally prefer the single malt version though.) If you are programming in Scala or Kotlin, which are also JVM languages, you can even evaluate expressions in a lazy way.

If you are programming in Scala you can write

lazy val z = "Hello"

and the expression will only be evaluated when z is accessed the first time. If you program in Kotlin you can write something like

val z: String by lazy { "Hello" }

and the expression will only be evaluated when z is accessed the first time.

Java does not support that lazy evaluation per se, but being a powerful language it provides language elements that you can use to have the same result. While Scala and Kotlin give you the fish, Java teaches you to catch your own fish. (Let’s put a pin in this thought.)

What really happens in the background, when you write the above lines in Scala or Kotlin, is that the expression is not evaluated and the variable will not hold the result of the expression. Instead, the languages create a virtual "lambda" expression, a 'supplier', that will later be used to calculate the value of the expression.

We can do that ourselves in Java. We can use a simple class, Lazy that provides the functionality:

import java.util.function.Supplier;

public class Lazy<T> implements Supplier<T> {

    private final Supplier<T> supplier;
    private boolean supplied = false;
    private T value;

    private Lazy(Supplier<T> supplier) {
        this.supplier = supplier;
    }

    public static <T> Lazy<T> let(Supplier<T> supplier) {
        return new Lazy<>(supplier);
    }

    @Override
    public T get() {
        if (supplied) {
            return value;
        }
        supplied = true;
        return value = supplier.get();
    }
}

The class has a public static method let() that can be used to define a supplier; the supplier is invoked the first time the method get() is called. With this class, you can write the above examples as

var z = Lazy.let( () -> "Hello" );

By the way, it seems to be even simpler than the Kotlin version. You can use the class from the library:

<groupId>com.javax0</groupId>
<artifactId>lazylet</artifactId>
<version>1.0.0</version>

and then you do not need to copy the code into your project. This is a micro-library that contains only this class, with an inner class that makes Lazy usable in a multi-thread environment.
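The article does not show the thread-safe inner class, but it may be sketched along the following lines, using double-checked locking. The class name SyncLazy and all details here are my assumption; the library's actual inner class may well differ.

```java
import java.util.function.Supplier;

// Hypothetical sketch of a thread-safe lazy supplier; not the library's code.
class SyncLazy<T> implements Supplier<T> {
    private final Supplier<T> supplier;
    private volatile boolean supplied = false; // volatile write safely publishes 'value'
    private T value;

    private SyncLazy(Supplier<T> supplier) {
        this.supplier = supplier;
    }

    static <T> SyncLazy<T> let(Supplier<T> supplier) {
        return new SyncLazy<>(supplier);
    }

    @Override
    public T get() {
        if (!supplied) {                 // fast path: no locking once the value is there
            synchronized (this) {
                if (!supplied) {         // re-check under the lock
                    value = supplier.get();
                    supplied = true;     // volatile write: 'value' is visible to all readers
                }
            }
        }
        return value;
    }
}

public class SyncLazyDemo {
    static int count = 0;

    public static void main(String[] args) {
        var z = SyncLazy.let(() -> { count++; return "Hello"; });
        System.out.println(count);   // 0: not evaluated yet
        System.out.println(z.get()); // Hello
        System.out.println(z.get()); // Hello
        System.out.println(count);   // 1: evaluated exactly once
    }
}
```

The volatile flag matters: it guarantees that a thread seeing supplied as true also sees the fully written value.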

The use is simple as demonstrated in the unit tests:

private static class TestSupport {
int count = 0;

boolean callMe() {
count++;
return true;
}
}

...

final var ts = new TestSupport();
var z = Lazy.let(ts::callMe);
if (false && z.get()) {
Assertions.fail();
}
Assertions.assertEquals(0, ts.count);
z.get();
Assertions.assertEquals(1, ts.count);
z.get();
Assertions.assertEquals(1, ts.count);

To get the multi-thread safe version you can use the code:

final var ts = new TestSupport();
var z = Lazy.sync(ts::callMe);
if (false && z.get()) {
Assertions.fail();
}
Assertions.assertEquals(0, ts.count);
z.get();
Assertions.assertEquals(1, ts.count);
z.get();
Assertions.assertEquals(1, ts.count);

and get a Lazy supplier that can be used by multiple threads and it is still guaranteed that the supplier passed as argument is evaluated only once.

Giving you a fish or teaching you to fish

I said to put a pin in the note “While Scala and Kotlin give you the fish, Java teaches you to catch your own fish.” Here comes what I meant by that.

Many programmers write programs without understanding how the programs are executed. They program in Java and write nice, working code, but they have no idea how the underlying technology works. They have no idea about class loaders or garbage collection. Or they do, but they do not know anything about the machine code that the JIT compiler generates. Or they even know that, but they have no idea about processor caches, different memory types, hardware architecture. Or they know those, but have no knowledge about microelectronics and lithography, what the layout of integrated circuits is, how the electrons move inside the semiconductor, how quantum mechanics determines the non-deterministic inner working of the computer.

I do not say that you have to be a physicist and understand the intricate details of quantum mechanics to be a good programmer. I recommend, however, understanding a few layers below your everyday working tools. If you use Kotlin or Scala, it is absolutely okay to use the lazy structures they provide. They give a programming abstraction one level higher than what Java provides in this specific case. But it is vital to know what the implementation probably looks like. If you know how to fish, you can buy the packaged fish, because then you can tell when the fish is good. If you do not know how to fish, you rely on the mercy of those who give you the fish.


Creating a Java::Geci generator

A few days back I wrote about Java::Geci architecture, code generation philosophy and the possible different ways to generate Java source code.

In this article, I will talk about how simple it is to create a generator in Java::Geci.

Hello, World generator

HelloWorld1

The simplest ever generator is a Hello, World! generator. This will generate a method that prints Hello, World! to the standard output. To create this generator the Java class has to implement the Generator interface. The whole code of the generator is:

package javax0.geci.tutorials.hello;

import javax0.geci.api.GeciException;
import javax0.geci.api.Generator;
import javax0.geci.api.Source;

public class HelloWorldGenerator1 implements Generator {
    public void process(Source source) {
        try {
            final var segment = source.open("hello");
            segment.write_r("public static void hello(){");
            segment.write("System.out.println(\"Hello, World\");");
            segment.write_l("}");
        } catch (Exception e) {
            throw new GeciException(e);
        }
    }
}

This really is the whole generator class. There is no simplification, no deleted lines. When the framework finds a file that needs the method hello(), it invokes process().

The method process() queries the segment named "hello". This refers to the lines

    //<editor-fold id="hello">
    //</editor-fold>

in the source code. The segment object can be used to write lines into the code. The method write() writes a line. The method write_r() also writes a line, but it additionally signals that the lines following it have to be indented. The opposite is write_l(), which signals that this line and the consecutive lines should be tabbed back to the previous position.
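The indentation semantics can be illustrated with a tiny stand-in class. MiniSegment below is my own hypothetical mock, not part of Java::Geci; it only mimics the described behaviour of write(), write_r() and write_l():

```java
// MiniSegment is a hypothetical stand-in mimicking the described semantics
// of the Java::Geci segment API; it is not part of the library.
public class MiniSegment {
    private final StringBuilder text = new StringBuilder();
    private int tabs = 0;

    void write(String line) {   // write a line at the current indentation level
        text.append("    ".repeat(tabs)).append(line).append('\n');
    }

    void write_r(String line) { // write the line, then indent the following lines
        write(line);
        tabs++;
    }

    void write_l(String line) { // un-indent already this line, then write it
        tabs--;
        write(line);
    }

    String text() {
        return text.toString();
    }

    public static void main(String[] args) {
        var segment = new MiniSegment();
        segment.write_r("public static void hello(){");
        segment.write("System.out.println(\"Hello, World\");");
        segment.write_l("}");
        System.out.print(segment.text());
        // prints:
        // public static void hello(){
        //     System.out.println("Hello, World");
        // }
    }
}
```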

To use the generator we should have a class that needs it. This is

package javax0.geci.tutorials.hello;

public class HelloWorld1 {
    //<editor-fold id="hello">
    //</editor-fold>
}

We also need a test that will run the code generation every time we compile the code and thus run the unit tests:

package javax0.geci.tutorials.hello;

import javax0.geci.engine.Geci;
import org.junit.jupiter.api.Assertions;
import org.junit.jupiter.api.DisplayName;
import org.junit.jupiter.api.Test;

import static javax0.geci.api.Source.maven;

public class TestHelloWorld1 {

    @Test
    @DisplayName("Start code generator for HelloWorld1")
    void testGenerateCode() throws Exception {
        Assertions.assertFalse(new Geci()
                .only("^.*/HelloWorld1.java$")
                .register(new HelloWorldGenerator1()).generate(), Geci.FAILED);
    }
}

When the code has executed, the file HelloWorld1.java is modified and gets the lines inserted between the editor folds:

package javax0.geci.tutorials.hello;

public class HelloWorld1 {
    //<editor-fold id="hello">
    public static void hello(){
        System.out.println("Hello, World");
    }
    //</editor-fold>
}

This is an extremely simple example that we can develop a bit further.

HelloWorld2

One thing that is sub-par in the example is that the scope of the generator is limited in the test by calling the only() method. It is a much better practice to let the framework scan all the files and select those source files that themselves somehow signal that they need the service of the generator. In the case of the "Hello, World!" generator, this can be the existence of the hello segment as an editor fold in the source code. If it is there, the code needs the method hello(); otherwise it does not. We can implement the second version of our generator that way. We also modify the implementation to not simply implement the interface Generator but rather extend the abstract class AbstractGeneratorEx. The postfix Ex in the name suggests that this class handles exceptions for us. This abstract class implements the method process() and calls the to-be-defined processEx(), which has the same signature as process() but is allowed to throw an exception. If that happens, the exception is encapsulated in a GeciException just as we did in the first example.

The code will look like the following:

package javax0.geci.tutorials.hello;

import javax0.geci.api.Source;
import javax0.geci.tools.AbstractGeneratorEx;

import java.io.IOException;

public class HelloWorldGenerator2 extends AbstractGeneratorEx {
    public void processEx(Source source) throws IOException {
        final var segment = source.open("hello");
        if (segment != null) {
            segment.write_r("public static void hello(){");
            segment.write("System.out.println(\"Hello, World\");");
            segment.write_l("}");
        }
    }
}

This is even simpler than the first one, although it checks for the existence of the segment. When the code invokes source.open("hello"), the method returns null if there is no segment named hello in the source code. The actual code using the second generator is the same as the first one. When we run both tests in the codebase, they both generate code, fortunately identical.

The test that invokes the second generator is

package javax0.geci.tutorials.hello;

import javax0.geci.engine.Geci;
import org.junit.jupiter.api.Assertions;
import org.junit.jupiter.api.DisplayName;
import org.junit.jupiter.api.Test;

import static javax0.geci.api.Source.maven;

public class TestHelloWorld2 {

    @Test
    @DisplayName("Start code generator for HelloWorld2")
    void testGenerateCode() throws Exception {
        Assertions.assertFalse(new Geci()
                .register(new HelloWorldGenerator2())
                .generate(), Geci.FAILED);
    }
}

Note that this time we did not need to limit the code scanning by calling the method only(). The documentation of the method only(RegEx x) also says that it is in the API of the generator builder only as a last resort.

HelloWorld3

The first and the second versions of the generator work on text files and do not use the fact that the code we modify is actually Java. The third version of the generator will rely on this fact, and that way it will be possible to create a generator which can be configured in the class that needs the code generation.

To do that we can extend the abstract class AbstractJavaGenerator. This abstract class finds the class that corresponds to the source code and also reads the configuration encoded in annotations on the class, as we will see. The abstract class implementation of processEx() invokes process(Source source, Class klass, CompoundParams global) only if the source code is a Java file, there is an already compiled class (sorry, compiler: we may modify the source code now, so there may be a need to recompile), and the class is annotated appropriately.

The generator code is the following:

package javax0.geci.tutorials.hello;

import javax0.geci.api.Source;
import javax0.geci.tools.AbstractJavaGenerator;
import javax0.geci.tools.CompoundParams;

import java.io.IOException;

public class HelloWorldGenerator3 extends AbstractJavaGenerator {
    public void process(Source source, Class<?> klass, CompoundParams global)
            throws IOException {
        final var segment = source.open(global.get("id"));
        final var methodName = global.get("methodName", "hello");
        segment.write_r("public static void %s(){", methodName);
        segment.write("System.out.println(\"Hello, World\");");
        segment.write_l("}");
    }

    public String mnemonic() {
        return "HelloWorld3";
    }
}

The method process() (an overloaded version of the method defined in the interface) gets three arguments. The first one is the very same Source object as in the first example. The second one is the Class that was created from the Java source file we are working on. The third one is the configuration that the framework read from the class annotation. This also needs the support of the method mnemonic(), which identifies the name of the generator. It is a string used as a reference in the configuration and it has to be unique.

A Java class that needs to be modified by a generator has to be annotated using the Geci annotation, defined as javax0.geci.annotations.Geci in the annotations library. The code of the source to be extended with the generated code will look like the following:

package javax0.geci.tutorials.hello;

import javax0.geci.annotations.Geci;

@Geci("HelloWorld3 id='hallo' methodName='hiya'")
public class HelloWorld3 {
    //<editor-fold id="hallo">
    //</editor-fold>
}

Here there is a bit of a nuisance. Java::Geci is a test-phase tool and all dependencies on it are test dependencies. The exception is the annotations library. This library has to be a normal dependency, because the classes that use code generation are annotated with this annotation, and therefore the JVM will look for the annotation class during run time, even though the annotation has no role at run time. For the JVM, test execution is just another run time; there is no difference.

To overcome this, Java::Geci lets you use any annotation as long as the name of the annotation interface is Geci and it has a value of type String. This way we can use the third hello world generator the following way:

package javax0.geci.tutorials.hello;

import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;

@HelloWorld3a.Geci(value = "HelloWorld3 id='hallo'", methodName = "hiyaHuya")
public class HelloWorld3a {
    //<editor-fold id="hallo">
    //</editor-fold>

    @Retention(RetentionPolicy.RUNTIME)
    @interface Geci {
        String value();

        String methodName() default "hello";
    }
}

Note that in the previous example the parameters id and methodName were defined inside the value string (which is the default parameter when you do not define any other parameter in an annotation). In that case, the parameters can easily be misspelled, and the IDE does not give you any support for them, simply because the IDE does not know anything about the format of the string that configures Java::Geci. On the other hand, if you have your own annotations, you are free to define any named parameters. In this example, we defined the method methodName in the interface. Java::Geci reads the parameters of the annotation as well as parsing the value string for parameters. That way some generators may have their own annotations that help the users with the parameters defined as annotation parameters.

The last version of our third “Hello, World!” application is perhaps the simplest:

package javax0.geci.tutorials.hello;

import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;

public class HelloWorld3b {
    //<editor-fold id="HelloWorld3" methodName = "hiyaNyunad">
    //</editor-fold>
}

There is no annotation on the class, and there is no comment that would look like an annotation. The only thing there is an editor-fold segment that has the id HelloWorld3, which is the mnemonic of the generator. If it exists, the AbstractJavaGenerator realizes that and reads the parameters from there. (Btw: it reads extra parameters that are not present on the annotation even if the annotation is present.) It not only reads the parameters but also calls the concrete implementation, so the code is generated. This approach is the simplest and can be used for code generators that need only one single segment to generate the code into, and that do not need separate configuration options for the methods and fields of the class.

Summary

In this article, I described how you can write your own generator and we also delved into how the annotations can be used to configure the class that needs generated code. Note that some of the features discussed in this article may not be in the release version but you can download and build the (b)leading edge version from https://github.com/verhas/javageci.

Handling exceptions functional style

Java has supported checked exceptions from the very start. With Java 8, the lambda language element and the runtime-library modifications supporting stream operations introduced a functional programming style to the language. Functional style and exceptions are not really good friends. In this article, I describe a simple library that handles exceptions somewhat similarly to how null is handled using Optional.

The library works (after all, it is a single class and some inner classes, really not many). On the other hand, I am not absolutely sure that using the library will not deteriorate the programming style of the average programmer. It may happen that someone having a hammer sees everything as a nail. A hammer is not a good pedicure tool. Look at this library more as an idea and not as a final tool that tells you how to create perfect exception-handling code.

Also, come and listen to the presentation of Michael Feathers about exceptions May 6, 2019, Zürich https://www.jug.ch/html/events/2019/exceptions.html

Handling Checked Exception

Checked exceptions have to be declared or caught, like a cold. This is a major difference from null. Evaluating an expression can silently be null, but it cannot silently throw a checked exception. When the result is null we may use that to signal that there is no value, or we can check it and use a "default" value instead of null. The code pattern doing that is

var x = expression;
if( x == null ){
  x = default expression that is really never null
}

The pattern topology is the same in case the evaluation of the expression can throw a checked exception, although the Java syntax is a bit different:

Type x; // you cannot use 'var' here
try{
  x = expression
}catch(Exception weHardlyEverUseThisValue){
  x = default expression that does not throw exception
}

The structure can be more complex if the second expression can also be null or may throw an exception and we need a third expression, or even more expressions, to evaluate in case the former ones failed. This is especially naughty in the case of an exception-throwing expression because of the many brackets:

Type x; // you cannot use 'var' here
try {
  x = expression1
} catch (Exception e1) {
  try {
    x = expression2
  } catch (Exception e2) {
    try {
      x = expression3
    } catch (Exception e3) {
      try {
        x = expression4
      } catch (Exception e4) {
        x = default expression that does not throw exception
      }
    }
  }
}

In the case of null handling, we have Optional. It does not perfectly fix the billion-dollar mistake, as the design decision of having null in a language has been called (and even that is an underestimation), but it makes life a bit better if used well. (And much worse if used in the wrong way, which you are free to say is exactly what I describe in this article.)

In the case of null resulting expressions, you can write

var x = Optional.ofNullable(expression)
         .orElse(default expression that is never null);

You can also write

var x = Optional.ofNullable(expression1)
         .or( () -> Optional.ofNullable(expression2))
         .or( () -> Optional.ofNullable(expression3))
         .or( () -> Optional.ofNullable(expression4))
         ...
         .orElse(default expression that is never null);

when you have many alternatives for the value. But you cannot do the same thing in case the expression throws an exception. Or can you?

Exceptional

The library Exceptional (https://github.com/verhas/exceptional)

<groupId>com.javax0</groupId>
<artifactId>exceptional</artifactId>
<version>1.0.0</version>

implements all the methods that are implemented in Optional, plus one more, and implements some of them a bit differently, aiming to be used for exceptions the same way as Optional was used above for null values.

You can create an Exceptional value using Exceptional.of() or Exceptional.ofNullable(). The important difference is that the argument is not the value but rather a supplier that provides the value. This supplier is not the JDK Supplier, because that one cannot throw an exception, and then the whole library would be useless. The supplier has to be an Exceptional.ThrowingSupplier, which is exactly the same as the JDK Supplier except that its method get() may throw an Exception. (Also note that it is only an Exception and not a Throwable, which you should only catch as often as you catch a red-hot iron ball with bare hands.)
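A minimal sketch of what such a throwing supplier interface looks like, together with a hypothetical helper that adapts it into a plain JDK Supplier with a fallback value. The helper method withDefault() is my own illustration, not the library's API:

```java
import java.util.function.Supplier;

public class ThrowingSupplierDemo {

    // A supplier whose get() is allowed to throw a checked exception,
    // mirroring what the text describes; the library's actual interface may differ.
    @FunctionalInterface
    interface ThrowingSupplier<T> {
        T get() throws Exception;
    }

    // Hypothetical helper: adapt a throwing supplier into a plain one with a default.
    static <T> Supplier<T> withDefault(ThrowingSupplier<T> supplier, T fallback) {
        return () -> {
            try {
                return supplier.get();
            } catch (Exception e) {
                return fallback;
            }
        };
    }

    public static void main(String[] args) {
        Supplier<String> ok = withDefault(() -> "value", "default");
        Supplier<String> failing = withDefault(() -> { throw new Exception("boom"); }, "default");
        System.out.println(ok.get());      // value
        System.out.println(failing.get()); // default
    }
}
```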

What you can write in this case is

var x = Exceptional.of(() -> expression) // you CAN use 'var' here
    .orElse(default expression that does not throw exception);

It is shorter, and shorter is usually more readable. (Or not? Is that why APL is so popular? Or is it? What is APL, you ask?)

If you have multiple alternatives you can write

var x = Exceptional.of(() -> expression1) // you CAN use 'var' here
    .or(() -> expression2)
    .or(() -> expression3) // these are also ThrowingSupplier expressions
    .or(() -> expression4)
...
    .orElse(default expression that does not throw exception);

In case some of the suppliers may return null, and not only throw an exception, there are ofNullable() and orNullable() variants of the methods. (orNullable() does not exist in Optional, but here it makes sense, if the whole library does at all.)

If you are familiar with Optional and use the more advanced methods like ifPresent(), ifPresentOrElse(), orElseThrow(), stream(), map(), flatMap() and filter(), then it will not be difficult to use Exceptional. Similar methods with the same names exist in the class. The difference again is that where the argument of a method in Optional is a Function, it is a ThrowingFunction in Exceptional. Using that possibility you can write code like

    private int getEvenAfterOdd(int i) throws Exception {
        if( i % 2 == 0 ){
            throw new Exception();
        }
        return 1;
    }

    @Test
    @DisplayName("some odd example")
    void testToString() {
        Assertions.assertEquals("1",
                Exceptional.of(() -> getEvenAfterOdd(1))
                        .map(i -> getEvenAfterOdd(i+1))
                        .or( () -> getEvenAfterOdd(1))
                .map(i -> i.toString()).orElse("something")
        );
    }

It is also possible to handle the exceptions in functional expressions like in the following example:

    private int inc(int i) throws Exception {
        if (i % 2 == 0) {
            throw new Exception();
        }
        return i + 1;
    }

    @Test
    void avoidExceptionsForSuppliers() {
        Assertions.assertEquals(14,
                (int) Optional.of(13).map(i ->
                        Exceptional.of(() -> inc(i))
                                .orElse(0)).orElse(15));
    }

Last, but not least you can mimic the ?. operator of Groovy writing

a.b.c.d.e.f

expressions, where any of the variables/fields may be null, and accessing the next field through a null reference causes an NPE. You can, however, write

var x = Exceptional.ofNullable( () -> a.b.c.d.e.f).orElse(null);

Summary

Remember what I told you about the hammer. Use with care and for the greater good and other BS.

How to generate source code?

In this article, I will talk about the different phases of software development where the source code can be generated programmatically and I will compare the different approaches. I will also describe the architecture and the ideas (the kind of eureka moment) of a specific tool that generates code at a specific phase.

Manually

This is the answer to the question set in the title. If it is at all possible for your purpose, you should write the code manually. I already wrote an article a year ago about code generation and I have not changed my mind.

You should not generate code unless you really have to.

Weird statement, especially when I promote a FOSS tool that is exactly targeting Java code generation. I know, and still, the statement is that you have to write all the code you can manually. Unfortunately, or for the sake of my little tool, there are enough occasions when manual code generation is not an option, or at least automated code generation seems to be a better option.

Why to generate manually

I discussed it already in the referenced article, but here we go again. When the best option is to generate source code, there is something wrong, or at least suboptimal, in the system. Either

  • the developer creating the code is sub-par,
  • the programming language is sub-par, or
  • the environment, some framework is sub-par.

Do not feel offended. When I talk about the "sub-par developer" I do not mean You. You are well above the average developer, last but not least because you are open and interested in new things, proven by the fact that you are reading this article. However, when you write code you should also consider the average developer Joe or Jane, who will maintain your program some time in the future. And there is a very specific feature of average developers: they are not good. They are not bad either; as the name suggests, they are average.

Legend of the sub-par developer

It may happen to you what has happened to me a few years back. It went like the following.

Solving a problem, I created a mini-framework. Not really a framework like Spring or Hibernate, because a single developer cannot develop anything like that. (That does not stop some of them from trying, even in a professional environment, which is a contradiction, as doing so is not professional.) You need a team. What I created was a single class doing some reflection "magic", converting objects to maps and back. Before that, we had toMap() and fromMap() methods in all classes that needed this functionality. They were created and maintained manually.

Luckily, I was not alone. I had a team. They told me to scrap the code I wrote and to keep creating the toMap() and fromMap() methods manually. The reason: the code has to be maintained by the developers who come after us. And we do not know them, as they are not even selected yet. They may still study at the university, or may not even be born. We know one thing: they will be average developers, and the code I created needs a tad more than average skills. On the other hand, maintaining the handcrafted toMap() and fromMap() methods does not require more than average skill, though the maintenance is error-prone. But that is only a cost issue that needs a bit more investment in QA, and it is significantly cheaper than hiring ace senior developers.

You can imagine my ambivalent feelings as my brilliant code was refused but with a cushion that praised my ego. I have to say, they were right.

Sub-par framework

Well, many frameworks are sub-par in this sense. Maybe "sub-par" is not really the best expression. For example, you generate Java code from a WSDL file. Why does the framework generate source code instead of Java byte code? There is a good reason.

Generating byte code is complex and needs special knowledge. It has a cost associated with it: it needs a byte-code generation library like Byte Buddy, it is more difficult to debug for the programmer using the code, and it is somewhat JVM-version dependent. If the code is generated as Java source, even when the generated code targets a later version of Java while the project uses a lagging one, the chances are better that the project can somehow downgrade the generated code than if it were byte code.

Sub-par language

Obviously, we are not talking about Java in this case, because Java is the best language in the world and there is nothing better. Or is there? If anyone claims that any programming language is perfect, ignore that person. Every language has strengths and weaknesses. Java is no different. If you consider that the language was designed more than 20 years ago and that, according to its development philosophy, it kept backward compatibility very strictly, it simply follows that there must be areas that are better in other languages.

Think about the equals() and hashCode() methods that are defined in the class Object and can be overridden in any class. There is not much invention in overriding either of them. The overridden implementations are fairly standard. In fact, they are so standard that the integrated development environments all support generating code for them. Why should we generate code for them? Why are they not part of the language in some declarative way? Those are questions that should have very good answers, because it would really not be a big deal to implement things like that in the language, and still they are not there. There has to be a good reason, but I am not the best person to write about it.
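For illustration, this is roughly what the generated implementations look like for a simple two-field class. This is a generic sketch of the common pattern, not the exact output of any particular IDE:

```java
import java.util.Objects;

// A typical "fairly standard" pair of overridden equals() and hashCode()
// methods, of the kind IDEs generate mechanically from the fields.
public class Point {
    private final int x;
    private final int y;

    public Point(int x, int y) {
        this.x = x;
        this.y = y;
    }

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;                 // same reference
        if (!(o instanceof Point)) return false;    // type check (also rejects null)
        Point point = (Point) o;
        return x == point.x && y == point.y;        // field-by-field comparison
    }

    @Override
    public int hashCode() {
        return Objects.hash(x, y);                  // combine the same fields
    }

    public static void main(String[] args) {
        System.out.println(new Point(1, 2).equals(new Point(1, 2))); // true
        System.out.println(new Point(1, 2).equals(new Point(2, 1))); // false
    }
}
```

Note that both methods enumerate exactly the same fields; this is the redundancy that breaks silently when a field is added later.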

As a summary of this part: if you cannot rely on the manually generated code, you can be sure that something is sub-par. This is not a shame. This is just how our profession generally is. This is how nature goes. There is no ideal solution, we have to live with compromises.

Then the next question is,

When to generate code?

Code generation principally can happen:

  • (BC) before compilation
  • (DC) during compilation
  • (DT) during the test phase
  • (DCL) during class loading
  • (DRT) during run-time

In the following, we will discuss these different cases.

(BC) Before compilation

The conventional phase is before compilation. In that case, the code generator reads some configuration or maybe the source code and generates Java code usually into a specific directory separated from the manual source code.

In this case, the generated source code is not part of the code that gets into the version control system. Code maintenance has to deal with the code generation and it is hardly an option to omit the code generator from the process and go on maintaining the code manually.

The code generator does not have easy access to the Java code structure. If the generated code has to use, extend or supplement the already existing manual code in any way, it has to analyze the Java source. It can be done line by line or using some parser. Either way, this is a task that the Java compiler will do again later, and there is also a slight chance that the Java compiler and the tool used to parse the code for the code generator are not 100% compatible.

(DC) during compilation

Java makes it possible to create so-called Annotation Processors that are invoked by the compiler. These can generate code during the compilation phase and the compiler will compile the generated classes. That way the code generation is part of the compilation phase.

The code generators running in this phase cannot access the compiled code, but they can access the compiled structure through an API that the Java compiler provides for the annotation processors.

It is possible to generate new classes, but it is not possible to modify the existing source code.

(DT) during the test phase

First, it seems to be a bit off. Why would anyone want to execute code generation during the test phase? However, the FOSS I try to “sell” here does exactly that, and I will detail the possibility, the advantages and honestly the disadvantages of code generation in this phase.

(DCL) During class loading

It is also possible to modify the code during class loading. The programs that do this are called Java agents. They are not real code generators: they work on the bytecode level and modify the already compiled code.

(DRT) During run-time

Some code generators work during run-time. Many of these applications generate Java bytecode directly and load the code into the running application. It is also possible to generate Java source code, compile it and load the resulting bytes into the JVM.
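As a sketch of that last variant, the following self-contained example (all class and file names are made up for the illustration) generates a Java source file at run-time, compiles it with the in-process JDK compiler and loads the resulting bytes into the running JVM:

```java
import java.net.URLClassLoader;
import java.nio.file.Files;
import java.nio.file.Path;
import javax.tools.ToolProvider;

public class RuntimeCompileDemo {
    public static void main(String[] args) throws Exception {
        // 1. generate source code at run-time
        Path dir = Files.createTempDirectory("gen");
        Path src = dir.resolve("Greeting.java");
        Files.write(src,
                "public class Greeting { public String hi() { return \"hi\"; } }".getBytes());

        // 2. compile it with the in-process Java compiler (exit code 0 means success)
        int result = ToolProvider.getSystemJavaCompiler()
                .run(null, null, null, "-d", dir.toString(), src.toString());

        // 3. load the resulting class into the running JVM and call the method
        try (URLClassLoader loader =
                     new URLClassLoader(new java.net.URL[]{dir.toUri().toURL()})) {
            Object greeting = loader.loadClass("Greeting").getConstructor().newInstance();
            System.out.println(greeting.getClass().getMethod("hi").invoke(greeting));
        }
        System.out.println("compiler exit code: " + result);
    }
}
```

Real run-time generators usually keep everything in memory instead of a temporary directory, but the moving parts are the same: a source (or bytecode) producer, a compiler invocation, and a class loader.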

Generating Code in Test Phase

This is the phase when and where Java::Geci (Java GEnerate Code Inline) generates the code. To help you understand how one comes to the weird idea of executing code generation during the unit tests (when it is already too late: the code is already compiled), let me tell you a story. The story is made up, it never happened, but that does not diminish its explanatory power.

We had a codebase with several data classes, each with several fields. We had to create the equals() and hashCode() methods for each of these classes. This, inevitably, meant code redundancy. When the class changed, when a field was added or deleted, the methods had to be changed as well. Deleting a field was not a problem: the compiler refuses to compile an equals() or hashCode() method that refers to a non-existent field. On the other hand, the compiler does not mind a method that fails to refer to a newly added field.

From time to time we forgot to update these methods, and we tried to invent more and more complex ways to counteract the error-prone human coding. The weirdest idea was to calculate an MD5 value of the field names and insert it as a comment into the equals() and hashCode() methods. In case there was a change in the fields, a test could check that the value in the source code differs from the one calculated from the names of the fields and signal an error: the unit test fails. We never implemented it.
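The never-implemented idea could have looked something like this sketch (the class and field names are made up): compute an MD5 fingerprint of the field names via reflection; a unit test would then compare it against the value recorded as a comment in the source.

```java
import java.lang.reflect.Field;
import java.security.MessageDigest;
import java.util.Arrays;
import java.util.stream.Collectors;

public class FieldFingerprint {

    // MD5 over the sorted field names: if a field is added or removed,
    // the fingerprint changes and a test comparing it to the value
    // recorded in the source as a comment would fail.
    static String of(Class<?> clazz) throws Exception {
        String names = Arrays.stream(clazz.getDeclaredFields())
                .map(Field::getName)
                .sorted() // getDeclaredFields() order is not guaranteed
                .collect(Collectors.joining(","));
        byte[] hash = MessageDigest.getInstance("MD5").digest(names.getBytes());
        StringBuilder sb = new StringBuilder();
        for (byte b : hash) sb.append(String.format("%02x", b));
        return sb.toString();
    }

    static class Person { String name; int age; }

    public static void main(String[] args) throws Exception {
        // the value that would have been pasted into equals()/hashCode()
        String recorded = of(Person.class);
        System.out.println("fingerprint: " + recorded);
        // the check a unit test would perform
        System.out.println("unchanged: " + recorded.equals(of(Person.class)));
    }
}
```

The weakness the story goes on to describe is visible here too: the test can only tell the programmer that something is stale, it cannot fix it.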

An even weirder idea, which turned out to be not that weird after all and finally led to Java::Geci, was to generate the expected equals() and hashCode() methods during the test, from the fields available via reflection, and compare them to the ones already in the code. If they do not match, they have to be regenerated. However, at this point the code is already regenerated. The only issue is that it is in the memory of the JVM and not in the file that contains the source code. Why only signal an error and tell the programmer to regenerate the code? Why should the test not write back the change? After all, we humans should tell the computer what to do and not the other way around!

And this was the epiphany that led to Java::Geci.

Java::Geci Architecture

Java::Geci generates code in the middle of the compilation, deployment, execution life cycle. Java::Geci is started when the unit tests are running during the build phase.

This means that the manual and previously generated code is already compiled and is available for the code generator via reflection.

Executing code generation during the test phase has another advantage. Any code generation that runs later should generate only code that is orthogonal to the manual code functionality. What does that mean? It has to be orthogonal in the sense that the generated code should not modify or interfere in any way with the existing, manually created code in a manner that could be discovered by the unit tests. The reason is that code generation happening in any later phase runs after the unit test execution, and thus there is no possibility to test whether the generated code affects the behavior of the code in any undesired way.

Generating code during the test phase makes it possible to test the code as a whole, taking the manual as well as the generated code into consideration. The generated code itself should not be tested per se (that is the task of the tests of the code generator project), but the behavior of the manual code that the programmers wrote may depend on the generated code, and thus the execution of the tests may depend on the generated code.

To ensure that all the tests are OK with the generated code, the compilation and the tests should be executed again in case there was any new code generated. To ensure this the code generation is invoked from a test and the test fails in case new code was generated.

To get this right, the code generation in Java::Geci is usually invoked
from a three-line unit test that has the structure:

Assertions.assertFalse(...generate(...),"code has changed, recompile!");

The call to generate(...) is a chain of method calls configuring the framework and the generators. When executed, the framework decides whether the generated code differs from the already existing code. It writes the Java code back to the source file if the code changed but leaves the file intact in case the generated code has not changed.

The method generate() which is the final call in the chain to the code
generation returns true if any code was changed and written back to
the source code. This will fail the test, but if we run the test again
with the already modified sources then the test should run fine.

This structure has some constraints on the generators:

  • Generators should generate exactly the same code if they are executed on the same source and classes. This is usually not a strong requirement, code generators do not tend to generate random source. Some code generators may want to insert timestamps as a comment in the code: they should not.
  • The generated code becomes part of the source and they are not compile-time artifacts. This is usually the case for all code generators that generate code into already existing class sources. Java::Geci can generate separate files but it was designed mainly for inline code generation (hence the name).

  • The generated code has to be saved to the repository and the manual source along with the generated code has to be in a state that does not need further code generation. This ensures that the CI server in the development can work with the original workflow: fetch – compile – test – commit artifacts to the repo. The code generation was already done on the developer machine and the code generator on the CI only ensures that it was really done (or else the test fails).

Note that the fact that the code is generated on a developer machine
does not violate the rule that the build should be machine independent.
In case there is any machine dependency then the code generation would
result in different code on the CI server and thus the build will break.

Code Generation API

Code generator applications should be simple. The framework has to do all the tasks that are the same for most of the code generators and should provide support; otherwise, what is the duty of a framework?

Java::Geci does many things for the code generators:

  • handles the configuration of the file sets to find the source files
  • scans the source directories and finds the source code files
  • reads the files and, if they are Java sources, helps to find the class that corresponds to the source code
  • supports reflection calls to help deterministic code generation
  • handles configuration in a unified way
  • supports Java source code generation in different ways
  • modifies the source files only when they changed and writes back the changes
  • provides fully functional sample code generators; one of those is a full-fledged Fluent API generator that alone could be a whole project
  • supports Jamal templating for code generation

Summary

Reading this article, you got a picture of how Java::Geci works. You can actually start using it by visiting the GitHub home page of Java::Geci. I will also deliver a talk about this topic at the JAX conference in Mainz on Wednesday, May 8, 2019, 18:15 – 19:15.

In the coming weeks, I plan to write more articles about the design considerations and actual solutions I followed in Java::Geci.

You are encouraged to contact me, fork the code, create tickets, follow me on Twitter or LinkedIn, whatnot. It is fun.

Get rid of pom XML… almost

Introduction

POM files are XML-formatted files that declaratively describe the build structure of a Java project to be built using Maven. Maintaining the POM XML files of large Java projects is often cumbersome. XML is verbose, and the structure of the POM also requires the maintenance of redundant information. The naming of the artifacts is often redundant, repeating some part of the name in the groupId and in the artifactId. In a multi-module project, the version of the project has to appear in many files. Some of the repetition can be reduced using properties defined in the parent POM, but you still have to define the parent POM version in each module POM, because you refer to a POM by the artifact coordinates and not simply as “the POM that is there in the parent directory”. The parameters of the plugins and the dependencies can be configured in the parent POM in the pluginManagement and dependencyManagement sections, but you still cannot get rid of the list of the plugins and dependencies in each and every module POM, even though they are usually just the same.
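To illustrate the kind of redundancy meant here, consider the following fragments (the artifact and the version number are only examples): the parent POM can pin a dependency version once in dependencyManagement, yet every module POM still has to repeat the dependency itself.

```xml
<!-- In the parent POM: the version is pinned once... -->
<dependencyManagement>
    <dependencies>
        <dependency>
            <groupId>org.junit.jupiter</groupId>
            <artifactId>junit-jupiter-api</artifactId>
            <version>5.4.2</version>
        </dependency>
    </dependencies>
</dependencyManagement>

<!-- ...but every module POM still has to repeat the dependency itself: -->
<dependencies>
    <dependency>
        <groupId>org.junit.jupiter</groupId>
        <artifactId>junit-jupiter-api</artifactId>
    </dependency>
</dependencies>
```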

You may argue with me, because it is also a matter of taste, but for me, POM files in their XML format are just too redundant and hard to read. Maybe I am not meticulous enough, but many times I miss some errors in my POM files and have a hard time fixing them.

There are some technologies that support other formats, but they are not widely used. One such approach to get rid of the XML is Polyglot Maven. However, if you look at the very first example on that GitHub page, which is a Ruby-format POM, you can still see a lot of redundant information and repetition. This is because Polyglot Maven plugs into Maven itself and only replaces the XML format with something different; it does not help with the redundancy of the POM structure itself.

In this article, I describe an approach that I found better than any other solution: the POM files remain XML for the build process, so there is no need for any new plugin or change in the build process, but these pom.xml files are generated using the Jamal macro language from a pom.xml.jam file and some extra macro files that are shared by the modules.

Jamal

The idea is to use a text-based macro language to generate the XML files from some source file that contains the same information in a reduced format. This is some kind of programming. The macro description is a program that outputs the verbose XML format. When the macro language is powerful enough, the source code can be descriptive and not too verbose. My choice was Jamal. To be honest, one of the reasons to select Jamal was that it is a macro language that I developed almost 20 years ago using Perl, and half a year ago I reimplemented it in Java.

The language itself is very simple. Text and macros are mixed together, and the output is the text along with the results of the macros. Macros start with the { character (or any other string that is configured) and end with the corresponding } character (or with the string configured as the closing string). Macros can be nested, and there is fine control over the order in which the nested macros are evaluated. There are user-defined and built-in macros. One of the built-in macros is define, which is used to define user-defined macros.
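This is of course not how Jamal is implemented, but the core idea of a parameterized user-defined macro can be illustrated with a toy expander in a few lines of Java (all names here are made up; a real implementation would, unlike this toy, also worry about parameter names that are prefixes of each other and about evaluation order):

```java
import java.util.Map;

public class ToyMacro {

    // Toy expansion: replace each formal parameter in the macro body
    // with the corresponding actual value.
    static String expand(String body, Map<String, String> actuals) {
        for (Map.Entry<String, String> e : actuals.entrySet()) {
            body = body.replace(e.getKey(), e.getValue());
        }
        return body;
    }

    public static void main(String[] args) {
        // a simplified GAV-like macro body
        String gav = "<groupId>_groupId</groupId><artifactId>_artifactId</artifactId>";
        System.out.println(expand(gav, Map.of(
                "_groupId", "com.javax0.geci",
                "_artifactId", "javageci-parent")));
    }
}
```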

An example talks better. Let’s have a look at the following test.txt.jam file.

{@define GAV(_groupId,_artifactId,_version)=
    {#if |_groupId|<groupId>_groupId</groupId>}
    {#if |_artifactId|<artifactId>_artifactId</artifactId>}
    {#if |_version|<version>_version</version>}
}

{GAV :com.javax0.geci:javageci-parent:1.1.2-SNAPSHOT}

Processing it with Jamal, we get


    <groupId>com.javax0.geci</groupId>
    <artifactId>javageci-parent</artifactId>
    <version>1.1.2-SNAPSHOT</version>

I deleted the empty lines manually for typesetting reasons, but you get the general idea. GAV is defined using the built-in macro define. It has three arguments named _groupId, _artifactId and _version. When the macro is used, the formal argument names in the body of the macro are replaced with the actual values, and the result replaces the user-defined macro in the text. The text of the define built-in macro itself is an empty string. There is a special meaning to using @ versus # in front of the built-in macros, but in this article, I cannot get into that level of detail.

The if macros also make it possible to omit groupId, artifactId or version, thus

{GAV :com.javax0.geci:javageci-parent:}

also works and will generate

    <groupId>com.javax0.geci</groupId>
    <artifactId>javageci-parent</artifactId>

If you feel that there is still a lot of redundancy in the definition of the macro: you are right. This is the simple approach to defining GAV, but you can go to the extreme:

{#define GAV(_groupId,_artifactId,_version)=
    {@for z in (groupId,artifactId,version)=
        {#if |_z|<z>_z</z>}
    }
}{GAV :com.javax0.geci:javageci-parent:}

Be warned that this needs an insane level of understanding of the macro evaluation order, but as an example, it shows the power. More information on Jamal: https://github.com/verhas/jamal

Let’s get back to the original topic: how Jamal can be used to maintain POM files.

Cooking pom to jam

There can be many ways; each may be just as good. Here I describe the first approach I used for the Java::Geci project. I created a pom.jim file (jim stands for Jamal imported or included files). This contains the definitions of macros like GAV, dependencies, dependency and many others. You can download this file from the Java::Geci source code repo: https://github.com/verhas/javageci The pom.jim file can be the same for all projects; there is nothing project-specific in it. There is also a version.jim file that contains the macros that define, in one single place, the project version, the version of Java I use in the project, and the groupId for the project. When I bump the release number from -SNAPSHOT to the next release, or from the release to the next -SNAPSHOT, this is the only place where I need to change it, and the macro can be used to refer to the project version in the top-level POM but also in the module POMs referring to the parent.

In every directory where there should be a pom.xml file, I create a pom.xml.jam file. This file imports the pom.jim file, so the macros defined there can be used in it. As an example, the pom.xml.jam file of the Java::Geci javageci-engine module is the following:

{@import ../pom.jim}
{project |jar|
    {GAV ::javageci-engine:{VERSION}}
    {parent :javageci-parent}
    {name|javageci engine}
    {description|Javageci macro library execution engine}

    {@include ../plugins.jim}

    {dependencies#
        {@for MODULE in (api,tools,core)=
            {dependency :com.javax0.geci:javageci-MODULE:}}
        {@for MODULE in (api,engine)=
            {dependency :org.junit.jupiter:junit-jupiter-MODULE:}}
    }
}

I think that this is fairly readable, at least for me it is more readable than the original pom.xml was:

<project xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns="http://maven.apache.org/POM/4.0.0"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <packaging>jar</packaging>
    <artifactId>javageci-engine</artifactId>
    <version>1.1.1-SNAPSHOT</version>
    <parent>
        <groupId>com.javax0.geci</groupId>
        <artifactId>javageci-parent</artifactId>
        <version>1.1.1-SNAPSHOT</version>
    </parent>
    <name>javageci engine</name>
    <description>Javageci macro library execution engine</description>
    <build>
        <plugins>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-surefire-plugin</artifactId>
            </plugin>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-source-plugin</artifactId>
            </plugin>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-javadoc-plugin</artifactId>
            </plugin>
        </plugins>
    </build>
    <dependencies>
        <dependency>
            <groupId>com.javax0.geci</groupId>
            <artifactId>javageci-api</artifactId>
        </dependency>
        <dependency>
            <groupId>com.javax0.geci</groupId>
            <artifactId>javageci-tools</artifactId>
        </dependency>
        <dependency>
            <groupId>com.javax0.geci</groupId>
            <artifactId>javageci-core</artifactId>
        </dependency>
        <dependency>
            <groupId>org.junit.jupiter</groupId>
            <artifactId>junit-jupiter-api</artifactId>
        </dependency>
        <dependency>
            <groupId>org.junit.jupiter</groupId>
            <artifactId>junit-jupiter-engine</artifactId>
        </dependency>
    </dependencies>
</project>

To start Jamal, I can use the Jamal Maven plugin. The easiest way to do that is to have a genpom.xml POM file in the root directory with the following content:

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns="http://maven.apache.org/POM/4.0.0"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <groupId>com.javax0.jamal</groupId>
    <artifactId>pom.xml_files</artifactId>
    <version>out_of_pom.xml.jam_files</version>
    <build>
        <plugins>
            <plugin>
                <groupId>com.javax0.jamal</groupId>
                <artifactId>jamal-maven-plugin</artifactId>
                <version>1.0.2</version>
                <executions>
                    <execution>
                        <id>execution</id>
                        <phase>clean</phase>
                        <goals>
                            <goal>jamal</goal>
                        </goals>
                        <configuration>
                            <transformFrom>\.jam$</transformFrom>
                            <transformTo></transformTo>
                            <filePattern>.*pom\.xml\.jam$</filePattern>
                            <exclude>target|\.iml$|\.java$|\.xml$</exclude>
                            <sourceDirectory>.</sourceDirectory>
                            <targetDirectory>.</targetDirectory>
                            <macroOpen>{</macroOpen>
                            <macroClose>}</macroClose>
                        </configuration>
                    </execution>
                </executions>
            </plugin>
        </plugins>
    </build>
</project>

Having this, I can start Maven with the command line mvn -f genpom.xml clean. This not only creates all the POM files but also clears the previous compilation results of the project, which is probably a good idea when the POM files change. It can also be executed when there is no pom.xml in the directory yet, or when the file is not valid due to some bug in the jam-cooked POM file. Unfortunately, all recursion has to end somewhere, and it is not feasible, though possible, to maintain genpom.xml itself as a jam-cooked POM file.

Summary

What I described is one approach: use a macro language as a source instead of raw-editing the pom.xml file. The advantage is the shorter and simpler project definition. The disadvantage is the extra POM generation step, which is manual and not part of the build process. You also lose the possibility of using the Maven release plugin directly, since that plugin modifies the POM file. I myself have always had problems using that plugin, but that is probably my fault and not the plugin's. Also, you have to learn a bit of Jamal, but that may also be an advantage if you happen to like it. In short: give it a try if you fancy. Starting is easy, since the tool (Jamal) is published in the central repo, and the source and the documentation are on GitHub; all you need is to craft the genpom.xml file, cook some jam, and start the plugin.

POM files are not the only source files that can be served with jam. I can easily imagine the use of Jamal macros in product documentation. All you need is to create a documentationfile.md.jam source file and modify the main POM to run Jamal during the build process, converting the .md.jam to the resulting macro-processed markdown document. You can also set up a separate POM, just like we did in this article, in case you want to keep the execution of the conversion strictly manual. You may even have java.jam files in case you want a preprocessor for your Java files, but I beg you not to do that. I do not want to burn in eternal flames in hell for giving you Jamal. It is not for that purpose.

There are many other possible uses of Jamal. It is a powerful macro language that is easy to embed into applications and also easy to extend with macros written in Java. Java::Geci also has a module (at version 1.0) that supports Jamal to ease code generation, though it still lacks some built-in macros that are planned to make it possible to reach out to the Java code structure via reflection. I am also thinking about developing some simple macros to read Java source files and include them into documentation. When I have some results, I will write about them.

If you have any idea what else this technology could be used for, do not hesitate to contact me.

var and language design

What is var in Java

The var predefined type introduced in Java 10 lets you declare local variables without specifying the type of the variable when you assign a value to it. The type of the expression you assign already defines the type of the variable, thus there is no reason to type the type on the left side of the line again. It is especially convenient when you have some complex, long types with a lot of generics, for example

HashMap<String, TreeMap<Integer, String>> myMap = mapGenerator();

The compiler could already infer generic type arguments in prior Java versions, but now you can simply type

var myMap = mapGenerator();

This is simpler and, most of the time, more readable than the previous version. The aim of var is mainly readability. It is important to understand that variables declared this way will still have a static type, and the introduction of this new predefined type (not a keyword) does not turn Java into a dynamic language. There are a few things that you can do this way that you could not do before, or only in a much more verbose way. For example, when you assign an instance of an anonymous class to a var-declared variable, you can invoke the methods declared in that class through the variable. For example:

var m = new Object(){ void z(){} };
m.z();

you can invoke the method z() but the code

Object m = new Object(){ void z(){} };
m.z();

does not compile. You can do that because anonymous classes actually have a type at their creation; they just lose it when the instance gets assigned to a variable declared to be of type Object.

There is a slightly shady side of var. Using it, we violate the general rule: instantiate the concrete class but declare the variable to be of the interface type. This is a general abstraction rule that we usually follow in Java most of the time. When I create a method that returns a HashMap, I usually declare the return value to be a Map. That is because HashMap is the implementation of the return value and, as such, is none of the caller's business. What I declare in the return type is that I return something that implements the Map interface. How I do it is my own business. Similarly, we usually declare the fields in classes to be of some interface type if possible. The same rule should also be followed by local variables. A few times it helped me a lot when I declared a local variable to be Set while the actual value was a TreeSet, and then, typing the code, I faced an error. Then I realized that I was using some of the features that are not Set but SortedSet. It helped me realize that sorted-ness was important in that special case, that it would also be important for the caller, and thus I had to change the return type of the method to SortedSet as well. Note that SortedSet in this example is still an interface and not an implementation class.
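The rule can be shown in a short sketch (the method and values are hypothetical): the method instantiates a TreeSet but declares SortedSet, exposing only the part of the contract, the sorted-ness, that the caller is meant to rely on.

```java
import java.util.SortedSet;
import java.util.TreeSet;

public class InterfaceTypeDemo {

    // Declare the interface, instantiate the implementation. The caller
    // learns only that sorted-ness is part of the contract, not that a
    // TreeSet happens to provide it.
    static SortedSet<String> namesInOrder() {
        SortedSet<String> names = new TreeSet<>();
        names.add("geci");
        names.add("jamal");
        names.add("abc");
        return names;
    }

    public static void main(String[] args) {
        // first() exists on SortedSet but not on Set: using it reveals
        // that the code relies on the ordering
        System.out.println(namesInOrder().first());
    }
}
```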

With the use of var we lose that, and we gain somewhat simpler source code. It is a trade-off, as always. In the case of local variables, the use of the variable is close, in terms of source code lines, to the declaration, so the developer can see at a glance what is what and what is happening; therefore the “bad” side of this trade-off is acceptable. The same trade-off for method return values or fields is not acceptable. These class members can be used from different classes and different modules. It is not only difficult but may even be impossible to see all the uses of these values; therefore, here we stay with the good old way: declare the type.

The future of var (just ideas)

There are cases when you cannot use var even for local variables. Many times we have the following coding pattern:

final var variable; // this does not work in Java 11
if ( some condition ) {
    variable = expression_1
    // do something here
} else {
    variable = expression_2
    // do something here
}

Here we cannot use var because there is no expression assigned to the variable in the declaration itself. The compiler, however, could be extended. From now on, what I talk about is not Java as it is now. It is what I imagine some future version could be.

If the structure is simple and the “do something here” is empty, then the structure can be transformed into a ternary operator:

final var variable = some condition ? ( expression_1 ) : (expression_2)

In this case, we can use the var declaration even if we use an old version of Java, e.g.: Java 11. However, be careful!

var h = true ? 1L : 3.3;

What will be the actual type of the variable h in this example? Number? The ternary operator has complex and special type-coercion rules, which usually do not cause any issue because the two expressions are close to each other. If we let the structure described above use a similar type coercion, then the expressions are not that close to each other anymore. As of now, the distance is far enough for Java not to allow the var type definition there. My personal opinion is that the var declaration should be extended sometime in the future to allow the above structure, but only in the case when the two (or more, in case of a more complex structure) expressions have exactly the same type. Otherwise, we may end up having an expression that results in an int, another that results in a String, and then what will the type of the variable be? Do not peek at the picture before answering!

(This great example was given by Nicolai Parlog.)
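You can check the coercion yourself. The sketch below shows that true ? 1L : 3.3 undergoes binary numeric promotion to double (so the long value 1 becomes 1.0 and boxes as Double), and that an int/String ternary infers a non-denotable intersection of the common supertypes, which a var-declared variable can hold:

```java
public class TernaryTypeDemo {
    public static void main(String[] args) {
        var h = true ? 1L : 3.3;      // binary numeric promotion: long -> double
        Object boxed = h;
        System.out.println(boxed.getClass().getSimpleName()); // Double
        System.out.println(h);                                // 1.0, not 1

        // int and String: the inferred type of o is the intersection of
        // their common supertypes, but each value keeps its runtime class
        var o = args.length == 0 ? 1 : "one";
        System.out.println(o.getClass().getSimpleName());
    }
}
```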

I can also imagine that in the future we will have something that is similar to Scala val, which is final var in Java 11. I do not like the var vs. val naming though. It is extremely sexy and geekish, but very easy to mistake one for the other. However, if we have a local variable declaration that starts with the final keyword then why do we need the var keyword after that?

Finally, I truly believe that var is a great tool in Java 11, but I also expect that its role will be extended in the future.

Implementing Basic REST APIs with JAX-RS

This is a guest article promoted by PACKT, the publisher I work with to get my books to the readers.

Learn how to implement basic REST APIs with JAX-RS in this article by Mario-Leander Reimer, a chief technologist for QAware GmbH and a senior Java developer and architect with several years of experience in designing complex and large-scale distributed system architectures.

This article will take a look at how to implement a REST resource using basic JAX-RS annotations. You’ll implement a REST API to get a list of books so that you’ll be able to create new books, get a book by ISBN, update books, and delete a book. The complete code for this book is also available at https://github.com/PacktPublishing/Building-RESTful-Web-Services-with-Java-EE-8.

Conceptual view of this section

You’ll create a basic project skeleton and prepare a simple class called BookResource and use this to implement the CRUD REST API for your books. First, you need to annotate your class using proper annotations. Use the @Path annotation to specify the path for your books API, which is "books" and make a @RequestScoped CDI bean.

Now, to implement your business logic, you can use another CDI bean. So, you need to get it injected into this one. This other CDI bean is called bookshelf, and you’ll use the CDI @Inject annotation to get a reference to your bookshelf. Next, implement a method to get hold of a list of all books.

What you see here is that you have a books() method, which is @GET annotated, and it produces MediaType.APPLICATION_JSON and returns a JAX-RS response. You can see that you construct a response of ok, which is HTTP 200; use bookshelf.findAll() as the body, which is a collection of books and then build the response. The BookResource.java file should look as follows:

@Path("books")
@RequestScoped
public class BookResource {

    @Inject
    private Bookshelf bookshelf;

    @GET
    @Produces(MediaType.APPLICATION_JSON)
    public Response books() {
        return Response.ok(bookshelf.findAll()).build();
    }

Next, implement a GET method to get a specific book. To do this, you have a @GET annotated method, but this time with a @Path annotation carrying the "/{isbn}" parameter. To get hold of the isbn parameter, use the @PathParam annotation to pass the value. Use bookshelf to find the book by ISBN and return the book found with the HTTP status code 200, that is, ok:

@GET
@Path("/{isbn}")
public Response get(@PathParam("isbn") String isbn) {
    Book book = bookshelf.findByISBN(isbn);
    return Response.ok(book).build();
}

In order to create something, it’s a convention to use HTTP POST as a method. You consume the application JSON and expect the JSON structure of a book. You call bookshelf.create with the book parameter and then use UriBuilder to construct the URI for the just-created book; this is also a convention. Return this URI using Response.created, which matches the HTTP status code 201, and call build() to build the final response:

@POST
@Consumes(MediaType.APPLICATION_JSON)
public Response create(Book book) {
    if (bookshelf.exists(book.getIsbn())) {
        return Response.status(Response.Status.CONFLICT).build();
    }

    bookshelf.create(book);
    URI location = UriBuilder.fromResource(BookResource.class)
            .path("/{isbn}")
            .resolveTemplate("isbn", book.getIsbn())
            .build();
    return Response.created(location).build();
}

You can implement the update method for an existing book. Again, it is a convention to use the HTTP method PUT: you update by putting to a specific location. Use the @Path annotation with a value of "/{isbn}". The isbn is referenced in the update() method parameter, and you have the JSON structure of the book as the body. Use bookshelf.update to update the book and, in the end, return the status code ok:

@PUT
@Path("/{isbn}")
public Response update(@PathParam("isbn") String isbn, Book book) {
    bookshelf.update(isbn, book);
    return Response.ok().build();
}

Finally, implement the delete method and use the HTTP method DELETE on the path of an identified ISBN. Using the @PathParam annotation, call bookshelf.delete() and return ok if everything went well:

@DELETE
@Path("/{isbn}")
public Response delete(@PathParam("isbn") String isbn) {
    bookshelf.delete(isbn);
    return Response.ok().build();
}

This is the CRUD implementation for your book resource. Use a Docker container and the Payara Server micro edition to run everything. Copy your WAR file to the deployments directory and then you’re up and running:

FROM payara/micro:5-SNAPSHOT

COPY target/library-service.war /opt/payara/deployments

See if everything’s running on your REST client (Postman). First, get a list of books. As you can see here, this works as expected:

If you want to create a new book, issue the POST and create new book request, and you’ll see a status code of OK 200. Get the new book using GET new book; this is the book you just created, as shown in the following screenshot:

Update the book using Update new book, and you'll get a status code of OK 200. You can get the updated book using GET new book. Get the updated title, as shown in the following screenshot:

Finally, you can delete the book. When you get the list of books, your newly created book is not a part of it anymore.

If you found this article interesting, you can explore Building RESTful Web Services with Java EE 8 to learn the fundamentals of Java EE 8 APIs to build effective web services. Building RESTful Web Services with Java EE 8 also guides you in leveraging the power of asynchronous APIs on the server and client side, and you will learn to use server-sent events (SSEs) for push communication.