Comparing files in Java

I am creating a series of video tutorials for PACKT about network programming in Java. There is a whole section about Java NIO. One sample program copies a file via a raw socket connection from a client to a server. The client reads the file from the disk, and the server saves the bytes to disk as they arrive. Because this is a demo, the server and the client run on the same machine and the file is copied into the exact same directory under a different name. The proof of the pudding is in the eating: the files have to be compared.

The file I wanted to copy was created to contain random bytes. Transferring only text information can sometimes leave tricky bugs lurking in the code. The random file was created using this simple Java class:

package packt.java9.network.niodemo;

import java.io.FileOutputStream;
import java.io.IOException;
import java.util.Random;

public class SampleMaker {
    public static void main(String[] args) throws IOException {
        byte[] buffer = new byte[1024 * 1024 * 10]; // 10MB buffer
        try (FileOutputStream fos = new FileOutputStream("sample.txt")) {
            Random random = new Random();
            for (int i = 0; i < 16; i++) { // 16 x 10MB = 160MB altogether
                random.nextBytes(buffer);
                fos.write(buffer);
            }
        }
    }
}

Comparing files in IntelliJ is fairly easy, but since the files are binary and large, this approach is not really optimal. I decided to write a short program that not only signals that the files are different but also tells where the difference is. The code is extremely simple:

package packt.java9.network.niodemo;

import java.io.BufferedInputStream;
import java.io.FileInputStream;
import java.io.IOException;

public class SampleCompare {
    public static void main(String[] args) throws IOException {
        long start = System.nanoTime();
        try (BufferedInputStream fis1 = new BufferedInputStream(new FileInputStream("sample.txt"));
             BufferedInputStream fis2 = new BufferedInputStream(new FileInputStream("sample-copy.txt"))) {
            int b1, b2;
            long pos = 1; // 1-based position of the byte we are about to compare
            boolean differ = false;
            do {
                b1 = fis1.read();
                b2 = fis2.read();
                if (b1 != b2) {
                    differ = true;
                    // one stream ended earlier, or the bytes at this position differ
                    if (b1 == -1 || b2 == -1) {
                        System.out.println("Files have different length");
                    } else {
                        System.out.println("Files differ at position " + pos);
                    }
                    break; // stop at the first difference
                }
                pos++;
            } while (b1 != -1);
            if (!differ) {
                System.out.println("Files are identical, you can delete one of them.");
            }
        }
        long end = System.nanoTime();
        System.out.print("Execution time: " + (end - start)/1000000 + "ms");
    }
}

The running time comparing the two 160MB files is around 6 seconds on my SSD-equipped MacBook, and it does not improve significantly if I specify a large, say 10MB, buffer as the second argument to the constructor of BufferedInputStream. (On the other hand, if we do not use BufferedInputStream at all, then the time is approximately ten times more.) This is acceptable, but if I simply issue diff sample.txt sample-copy.txt from the command line, then the response is significantly faster than 6 seconds. The difference can come from many things: Java startup time, code interpretation at the start of the while loop until the JIT compiler decides it is time to start working. My hunch is, however, that the code spends most of its time reading the file into memory. Reading the bytes into the buffer is a complex process. It involves the operating system, the device drivers and the JVM implementation, and they move bytes from one place to the other, while in the end we only compare the bytes, nothing else. It can be done in a simpler way. We can ask the operating system to do it for us and skip most of the Java runtime activities, file buffers, and other frills.
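
For reference, this is how the buffer size can be specified (a sketch; the 10MB value is the one mentioned above):

BufferedInputStream fis1 = new BufferedInputStream(
        new FileInputStream("sample.txt"), 10 * 1024 * 1024); // 10MB buffer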

We can ask the operating system to read the file into memory and then just fetch the bytes one by one from where they are. We do not need a buffer that belongs to a Java object and consumes heap space. We can use memory mapped files. After all, memory mapped files use Java NIO, and that is exactly the topic of the part of the tutorial videos currently in the making.

Memory mapped files are read into memory by the operating system and the bytes are available to the Java program. The memory is allocated by the operating system and does not consume heap memory. If the Java code modifies the content of the mapped memory, then the operating system writes the change to disk in an optimized way, when it thinks it is due. This, however, does not mean that the data is lost if the JVM crashes. When the Java code modifies the memory mapped file, it modifies memory that belongs to the operating system and remains available and valid after the JVM has stopped. There is no guarantee and no 100% protection against power outage and hardware crash, but those are very low-level concerns. If anyone is afraid of those, then the protection should be on the hardware level, which Java has nothing to do with anyway. With memory mapped files we can be sure that the data is saved to disk with a very high probability that can only be increased by failure-tolerant hardware, clusters, uninterruptible power supplies and so on. These are not Java. If you really have to do something from Java to have the data written to disk, then you can call the MappedByteBuffer.force() method, which asks the operating system to write the changes to disk. Calling this too often and unnecessarily may hinder performance though. (Simply because it writes the data to disk and returns only when the operating system reports that the data was written.)
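
Forcing the changes out from Java could look something like this minimal sketch, assuming a writable mapping of a hypothetical file data.bin (using java.nio.file.Paths and java.nio.file.StandardOpenOption):

try (FileChannel ch = FileChannel.open(Paths.get("data.bin"),
        StandardOpenOption.READ, StandardOpenOption.WRITE)) {
    MappedByteBuffer map = ch.map(FileChannel.MapMode.READ_WRITE, 0, ch.size());
    map.put(0, (byte) 42); // modify the mapped memory
    map.force();           // ask the OS to write the change to disk now
}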

Reading and writing data using memory mapped files is usually much faster for large files. To get the appropriate performance the machine should have significant memory; otherwise, only part of the file is kept in memory and the number of page faults increases. One of the good things is that if the same file is mapped into memory by two or more different processes, then the same memory area is used, so processes can even communicate with each other this way.

The comparing application using memory mapped files is the following:

package packt.java9.network.niodemo;

import java.io.IOException;
import java.io.RandomAccessFile;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;

public class MapCompare {
    public static void main(String[] args) throws IOException {
        long start = System.nanoTime();
        try (FileChannel ch1 = new RandomAccessFile("sample.txt", "r").getChannel();
             FileChannel ch2 = new RandomAccessFile("sample-copy.txt", "r").getChannel()) {
            if (ch1.size() != ch2.size()) {
                System.out.println("Files have different length");
                return;
            }
            long size = ch1.size();
            // map() can map at most Integer.MAX_VALUE bytes, i.e. files up to 2GB
            ByteBuffer m1 = ch1.map(FileChannel.MapMode.READ_ONLY, 0L, size);
            ByteBuffer m2 = ch2.map(FileChannel.MapMode.READ_ONLY, 0L, size);
            for (int pos = 0; pos < size; pos++) {
                if (m1.get(pos) != m2.get(pos)) {
                    System.out.println("Files differ at position " + pos);
                    return;
                }
            }
        }
        System.out.println("Files are identical, you can delete one of them.");
        long end = System.nanoTime();
        System.out.print("Execution time: " + (end - start) / 1000000 + "ms");
    }
}

To memory map the files we have to open them first using the RandomAccessFile class and ask for the channel from that object. The channel can be used to create a MappedByteBuffer, which is the representation of the memory area where the file content is loaded. The map method in the example maps the file in read-only mode, from the start of the file to its end. We try to map the whole file, which works only if the file is not larger than 2GB. The start position is a long, but the size of the area to be mapped is limited to Integer.MAX_VALUE.

That is generally it… Oh yes, the running time comparing the two 160MB random-content files is around 1 second.

UPDATE:

https://twitter.com/snazy pointed out that this part of the code

        for (int pos = 0; pos < size; pos++) {
            if (m1.get(pos) != m2.get(pos)) {
                System.out.println("Files differ at position " + pos);
                return;
            }
        }

can be replaced using the built-in ByteBuffer::mismatch method. The code is simpler, it does exactly what the example code aims at, and it is probably faster.
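
With that, the whole loop boils down to something like the following sketch (note that, as far as I know, ByteBuffer.mismatch() is only available from Java 11 on):

int diff = m1.mismatch(m2);
if (diff == -1) {
    System.out.println("Files are identical, you can delete one of them.");
} else {
    System.out.println("Files differ at position " + diff);
}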

Rating Articles

My last article received the comment: “Weakest article in a long time, and it shows a prime example of typical inch pincher stuff certain people still like to make a fuzz about.”

To ease this type of feedback for you I switched on the rating functionality. (I do not know why I did not do that before.)

You can simply click on the stars at the top of an article to express how much you like or dislike it. (The stars are not shown on the opening page; you have to click on the title of the article to get to the article’s own page.) This will help me write better articles, and it will also help other readers skip the articles that are not that good.

Format the code

PACKT recently published the book Mastering Java 9, which I co-authored. The author who started to write the book could not finish it, and PACKT had to find people who agreed to write some of the chapters. I had previously successfully finished Java 9 Programming by Example, and PACKT asked me if I could help. In the end, I wrote the book together with Dr. Edward Lavieri.

To my surprise, he used C#-style brace placement in the book. That is, the { character is placed not at the end of the line but rather at the start of the next line.

if( myCondition )
{
  do something;
}

This is against the usual Java coding convention.

When I started programming C in 1984, I started to use this bracket placement. At that time the Internet was not available, and there were no tools to share opinions with as wide an audience as now. It was also not clear how much formatting matters when we are coding. What is more, it was not even clear how important readability is.

Later I learned that the usual convention in C programming is to put the { character at the end of the line.

if( myCondition ){
  do something;
}

In the case of C, it is not always so. Some programmer teams use the first bracket placement, while others use the latter.

In the case of Java, the C#-style bracket placement is almost extinct. Perhaps it would be interesting to see statistics over the sources available on GitHub showing what percentage of the Java code uses one or the other.

Why is it interesting? Because

You should not use C# formatting when you program Java code!

There are exceptions, as always. For example, the company you work for insists on the other coding style. In that case, the company made a bad decision and, although it may be okay or even great working for this company, this is certainly a company smell. (See code smell.)

Why should you not use the other style?

Because of readability. Readability is subjective, and still, in this case, I dare say that the Java-style bracket placement is more readable. Hold your horses before ranting; I will explain.

Что было раньше, курица или яйцо?

For most of you, the above sentence is not readable. I can read it because I grew up in Eastern Europe, where learning Russian was mandatory. It is also readable for most Russian people. They can not only read it, they can even understand it. For them, it is just as readable as the English sentence

Which came first, the chicken or the egg?

for us.

Readability depends on what we got used to. Java programmers got used to the { placed at the end of the line and the } on a line of its own; C# programmers use the other style. If you see code that is formatted differently, you may overlook some aspect of the code that you would not miss otherwise. The difference is subtle, but it is there. When you hire Java developers, you are more likely to find good Java developers for a reasonable price who use and are accustomed to the industry standard than developers who are accustomed to the C# style.

You can find here and there some developers who are also fluent in C# and can read Cyrillic… oops… C# “characters”, but they are less common than pure Java developers.

The bottom line is that the TCO (total cost of ownership) of the code will be lower during the lifetime of the code’s development and maintenance if you follow the industry standards. It is that simple.

P.S.: Buy the books!

Raid, backup and archive

This is a short tutorial about the similarities and differences of redundant storage, backup, and archive functionality. I felt the need to create this short introduction because I realized that many IT professionals do not know the difference between these operations and often mix them up or use the wrong approach for a given purpose.

I once personally witnessed a backup at a Hungarian bank that was stored on a partition of a RAID-set disk, which also held the operational data. A RAID controller failure happened. The backup was unusable. Technically, it was not a backup. A Digital Equipment Corp. engineer spent two weeks restoring the allocation bits of the RAID set to recover the account data. Although neither the bank, which shall not be named, nor Digital exists anymore, I am more than convinced that similar backups still do.

What these methods are

Redundant storage, backup, and archive all copy operational data. They do that to make operation more stable. The copied data is stored in a redundant way, and if some event deletes or corrupts the original data, the copied version is still available. The differences between these redundancy-increasing strategies are

  • (NEED) the type of event that creates the need for the deleted data
  • (CAUSE) the type of event that causes the deletion of the data
  • (DISCOVERY) how the data loss or need is recognized
  • (HOW) how the actual copy is created and stored

Redundant storage

Redundant storage copies the data online and all the time. (HOW) When there is some change in the data, the redundant information is created on some storage media as soon as the hardware and software make it possible. The copy operation is not batched. It does not wait for a bunch of data to be copied together. Data is copied as soon as possible.

The actual implementation is usually some RAID configuration. A RAID configuration connects two or more same-size disks in parallel. In the case of two disks, anything written to one is written to the other at the same time. When reading, either of the disks can be used, which makes reading up to twice as fast in terms of data transfer, assuming that the data transfer bus between the disks and the computer is fast enough. Seek time in the case of rotating (non-SSD) disks is not improved.

When there are three or more disks, the writing is a bit different. In this setup, whenever a bit is changed on one of the disks, a bit is also changed on the last disk of the RAID set. The RAID controller keeps each bit of the last disk equal to the XOR value of the corresponding bits on the other disks. That way the data is “partially copied”: any single lost disk can be reconstructed from the remaining ones.
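
The XOR trick is easy to demonstrate in a few lines of Java (a toy sketch, not how a real controller works):

public class XorParityDemo {
    public static void main(String[] args) {
        byte[] d1 = {1, 2, 3};               // data disk 1
        byte[] d2 = {4, 5, 6};               // data disk 2
        byte[] parity = new byte[d1.length]; // parity disk
        for (int i = 0; i < parity.length; i++) {
            parity[i] = (byte) (d1[i] ^ d2[i]);
        }
        // if disk 1 is lost, every byte of it can be recovered
        // from the remaining data disk and the parity disk
        for (int i = 0; i < parity.length; i++) {
            byte recovered = (byte) (d2[i] ^ parity[i]);
            System.out.println(recovered == d1[i]); // prints true
        }
    }
}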

In case of a hardware failure, RAID solutions usually allow the faulty disk to be replaced without switching off the disk system. The controller automatically reconstructs the missing data.

(NEED) Redundant storage keeps the data available during normal operation and prevents data loss in case of (CAUSE) hardware failure. All the data is copied all the time, and in case there is a failure, the recovery costs only a few milliseconds of delay in data access. Restoring the redundancy itself may take longer, in the range of a few minutes or hours, but the data remains available unless there are multiple failures.

(DISCOVERY) The data loss is automatically detected because the redundancy is checked upon every read.

Backup

(HOW) Backup copies data, usually to offline media. The copy is started at regular intervals, like every hour, day or week. When a backup is executed, the files that changed since the last backup are copied to the backup media. Backup can cover the application data or the whole operating system. Many times the operating system is not backed up: when there is a need to restore the information, the OS is installed fresh from the installation media and only the application files are restored from the backup storage. This requires smaller backup storage and allows faster backup and restore execution.

There are different techniques called full, incremental and differential backups. Creating backups without purging old data would infinitely grow the size of the backup media. This would not only cost ever-increasing money for the media, but the burden of cataloging and keeping the old media would also mean a huge operational cost. To optimize the costs, old backups are deleted following some strategy. As an example, a strategy can require creating a backup every day and deleting the backups that are older than one week, except those that were created on a Monday. Backups older than a month can also be deleted, except those created on the first Monday of the month, and similarly, backups older than a year may be deleted except the backups from January and June.

(NEED) The data stored on the backup media is needed when it is discovered that some data was deleted. (CAUSE) The reason for the deletion may be human error or sabotage. A user of the system mistyped the name of a record to be deleted, or thought that the data was not needed anymore, and later it turns out that it was a mistake. Sabotage is a deliberate action when somebody having access to the system deletes or alters data as a wrongdoing. In either case, the data is ruined by human interaction. It may also be possible that the data is ruined by a disaster (flood, fire, earthquake) or some hardware error that causes much more severe damage than a simple disk error.

The backup media itself can also be the target of sabotage, and a disaster can damage it as well. For this reason, backup media is usually stored offline, disconnected from the main operating system, and many times it is transferred to a different location.

When data needs to be restored, the backup media has to be copied back to the operational components to restore the information that was deleted or altered. The restore process connects the backup media, or a copy of it, to the operational components and copies the data back. The connecting is usually a manual process, because anything automated could itself be the target of sabotage. Because of the manual nature of the process, restoring a backup usually takes a long time: a few minutes, hours or even days. Usually, the older the backup, the more time is needed to get the operational data back.

Archive

(HOW) The creation of an archive is very similar to the creation of a backup. We copy some of the data to some offline media and we store it for a long time. The archive copy usually covers only data that was not yet archived; this way the archive is usually incremental. (CAUSE) An archive stores data that is deleted from the system deliberately by the normal operational processes, because it is not needed for operation. The archive does not aim to provide a backup source for data that is found to have been deleted accidentally. The data stored in the archive is never needed for normal operation. (DISCOVERY/NEED) The archive data is needed for extraordinary operation.

For example, a mobile company does not need the cell information of individual phones for a long time. It is operational data stored in the HLR and VLR databases, and this information is usually not even backed up. In case of data loss, gathering the actual information from the GSM network is faster than restoring it from a backup that is probably fairly outdated (mobile phones move in the meantime). On May 9, 2002, robbers killed 8 people in the small Hungarian town of Mór. A few years later, when the investigation got to the point of examining the mobile phone movements in the area, the data was not available as operational data, but it was available in the archives. Analysing GSM cell data to support a homicide investigation is not a normal operation of a telecom company.

You archive data that you are obligated by law to store and archive, or that you suspect you may need for some unforeseeable future purpose. Records that describe business-level operations and transactions are usually archived.

Comparison

  • Redundant storage: the copy is made online and continuously (HOW); it protects against hardware failure (CAUSE); it keeps the data available during normal operation (NEED); data loss is discovered automatically, upon every read (DISCOVERY).
  • Backup: the copy is made to offline media at regular intervals (HOW); it protects against human error, sabotage and disaster (CAUSE); it is needed when deleted or altered data has to be restored (NEED); the data loss is discovered by humans, sometimes long after the fact (DISCOVERY).
  • Archive: the copy is made to offline media, usually incrementally, before the data is deliberately deleted by normal operation (HOW/CAUSE); it is needed for extraordinary purposes, such as legal obligations or investigations (NEED/DISCOVERY).

As you can see from the above, none of these methods can replace the others. They supplement each other, and if you do not implement one of them, you can expect the operation to be sub-par.

The example in the intro explains clearly why redundant storage does not eliminate the need for a backup. Similarly, archiving cannot be replaced by an otherwise proper backup solution. The error, in this case, will not hit you as harshly and as evidently, because of the long-term nature of the archive. Nevertheless, an archive is not the same as a backup.

In some cases, I have seen an archive used as the source for restoring data. This is a forgivable sin only when the data loss has already happened and the archive still has the data you need. On the other hand, the archive does not contain all the operational data, only the data that has long-term business relevance.

Summary

This is a short introduction to redundant storage, backup, and archive. Do not think that understanding what is written here makes you an expert in any of these topics. Each of them is a special expert area with tons of literature to learn and loads of exercises to practice. On the other hand, now you should understand the basic roles of these methods, what they are good for and what they are not, and you should know the most important differences, so you can avoid the mistakes that others have already committed.

There is no need to repeat old mistakes. Commit new ones!

Java 9 Module Services

Wiring and Finding

Java has had a ServiceLoader class for a long time. It was introduced in Java 1.6, but a similar technology had been in use since around Java 1.2. Some software components used it, but its use was not widespread. It can be used to modularize the application (even more) and to provide a means to extend an application using plug-ins that the application does not depend on at compile time. Also, the configuration of these services is very simple: just put them on the class/module path. We will see the details.

The service loader can locate implementations of some interface. In an EE environment there are other methods to configure implementations. In the non-EE environment, Spring became ubiquitous; it has a similar, though not identical, solution to a similar, but not identical, problem. Inversion of Control (IoC) and Dependency Injection (DI) as provided by Spring solve the configuration of the wiring of the different components, and they are the industry best practice for separating the wiring description/code from the actual implementation of the functionality that the classes have to perform.

As a matter of fact, Spring also supports the use of the service loader, so you can wire in an implementation located and instantiated by the service loader. You can find a short and nicely written article about that here.

ServiceLoader is more about how to find the implementation before we can inject it into the components that need it. Junior programmers sometimes mistakenly mix the two, and not without reason: they are strongly related.

Perhaps because of this, most applications, at least those that I have seen, do not separate the wiring and the finding of the implementation. These applications usually use the Spring configuration for both finding and wiring, and this is just fine. Although it is a simplification, we should live with it and be happy with it. We should not separate the two functions just because we can. Most applications do not need to separate them. They sit neatly on a single line of the XML configuration of a Spring application.

We should program on a level of abstraction that is needed but never more abstract.

Yes, this sentence is a paraphrase of a saying attributed to Einstein. If you think about it, you can also realize that this statement is nothing more than the KISS principle (keep it simple, stupid). The code, not you.

ServiceLoader finds the implementations of a certain interface. Not all the implementations that may be on the classpath, though: it finds only those that are “advertised”. (I will tell you later what “advertised” means.) A Java program cannot traverse all the classes that are on the classpath. Or can it?

Browsing the classpath

This section is a little detour, but it is important to understand why ServiceLoader works the way it does, even before we discuss how it works.

Java code cannot query the classloader to list all the classes that are on the classpath. You may say I lie, because Spring does browse the classes and automatically finds the implementation candidates. Spring actually cheats, and I will tell you how. For now, accept that the classpath cannot be browsed. If you look at the documentation of the class ClassLoader, you will not find any method that would return an array, stream or collection of classes. You can get the array of packages, but you cannot get the classes even from the packages.

The reason for this is the level of abstraction at which Java handles classes. The class loader loads the classes into the JVM, and the JVM does not care where they come from. It does not assume that the classes are in files. There are a lot of applications that load classes not from files. As a matter of fact, most applications load some of their classes from some different medium. So do your programs, you just may not know it. Have you ever used Spring, Hibernate or some other framework? Most of these frameworks create proxy objects during run-time and then load these objects from memory using a special class loader. The class loader cannot tell whether there will ever be a new class created by the framework it supports. The classpath, in this case, is not static. There is not even such a thing as a classpath for these special class loaders: they find the classes dynamically.

Okay. Well said and described in detail. But then again: how does Spring find the classes? Spring actually makes a bold assumption. It assumes that the class loader is a special one: URLClassLoader. (And, as Nicolai Parlog writes in his article, this is not true with Java 9 any more.) It works with a classpath that contains URLs, and it can return the array of those URLs.

ServiceLoader does not make such an assumption and as such it does not browse the classes.

How does ServiceLoader Find a Class

The ServiceLoader can find and instantiate classes that implement a specific interface. When we call the static method ServiceLoader.load(interfaceKlass), it returns a “list” of classes that implement this interface. I use “list” in quotes because technically it returns an instance of ServiceLoader, which itself implements Iterable, so we can iterate over the instances of the classes that implement the interface. The iteration is usually done in a for-each loop, with the call to load() standing after the colon (:).
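
A minimal sketch, assuming a hypothetical service interface MyService:

for (MyService service : ServiceLoader.load(MyService.class)) {
    // each iteration yields an already instantiated implementation
    service.doSomething(); // doSomething() is an assumed method of the hypothetical interface
}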

To successfully find the instances, the JAR files that contain the implementations have to have a special file in the directory META-INF/services, named after the fully qualified name of the interface. Yes, the file name has dots in it and no specific extension, but nevertheless, it has to be a text file. It has to contain the fully qualified name of the class that implements the interface in that JAR file.
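
For example, assuming the hypothetical interface com.example.spi.MyService is implemented by com.example.impl.MyServiceImpl, the JAR would contain the file

META-INF/services/com.example.spi.MyService

whose entire content is the single line

com.example.impl.MyServiceImpl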

The ServiceLoader invokes the ClassLoader method findResources to get the URLs of these files, reads the names of the classes, and then asks the ClassLoader again to load those classes. The classes have to have a public zero-argument constructor so that the ServiceLoader can instantiate each of them.
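
Doing by hand what the ServiceLoader does internally could look something like this sketch (it uses the public getResources() method, assumes one class name per file for brevity, and sticks with the hypothetical interface name from above):

static List<MyService> loadByHand() throws Exception {
    List<MyService> services = new ArrayList<>();
    Enumeration<URL> urls = ClassLoader.getSystemClassLoader()
            .getResources("META-INF/services/com.example.spi.MyService");
    while (urls.hasMoreElements()) {
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(urls.nextElement().openStream()))) {
            String className = in.readLine().trim();
            services.add((MyService) Class.forName(className)
                    .getDeclaredConstructor().newInstance());
        }
    }
    return services;
}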

Having those files contain the names of the classes, piggybacking the class loading and instantiation on resource loading, works, but it is not too elegant.

Java 9, while keeping the annoying META-INF/services solution, introduced a new approach. With the introduction of Jigsaw, we have modules, and modules have module descriptors. A module can define a service that a ServiceLoader can load, and a module can also specify what services it may need to load via the ServiceLoader. This way the discovery of the implementations of the service interface moves from textual resources to Java code. The pure advantage is that coding errors related to wrong names can be identified at compile time or at module load time, making failing code fail faster.

To make things more flexible, or just uselessly more complex (the future will tell), Java 9 also accepts a class that does not itself implement the service interface but has a public static provider() method returning an instance of a class that implements the interface. (By the way: in this case, the provider class may even implement the service interface if it wants to, but it is generally a factory, so why would it. Mind SRP.)
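
A sketch of this variant, with the same hypothetical names as before:

public class MyServiceFactory {
    // the ServiceLoader invokes this method instead of a constructor;
    // the factory itself does not implement MyService
    public static MyService provider() {
        return new MyServiceImpl();
    }
}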

Sample Code

You can download a multi-module maven project from https://github.com/verhas/module-test.

This project contains three modules: Consumer, Provider and ServiceInterface. The consumer calls the ServiceLoader and consumes the service, which is defined by the interface javax0.serviceinterface.ServiceInterface in the module ServiceInterface and implemented in the module Provider.

The module-info files contain the declarations:

module Provider {
    requires ServiceInterface;
    provides javax0.serviceinterface.ServiceInterface
      with javax0.serviceprovider.Provider;
}

module Consumer {
    requires ServiceInterface;
    uses javax0.serviceinterface.ServiceInterface;
}

module ServiceInterface {
    exports javax0.serviceinterface;
}
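
The Consumer main class can be as simple as the following sketch (the method name hello() is my assumption; the actual code in the repository may differ):

package javax0.serviceconsumer;

import java.util.ServiceLoader;

import javax0.serviceinterface.ServiceInterface;

public class Consumer {
    public static void main(String[] args) {
        for (ServiceInterface service : ServiceLoader.load(ServiceInterface.class)) {
            service.hello(); // invoke whatever method the service interface defines
        }
    }
}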

Pitfalls

Here I will tell you about some of the stupid mistakes I made while creating this very simple example, so that you can learn from my mistakes instead of repeating them. First of all, there is a sentence in the Java 9 JDK documentation of ServiceLoader that reads

In addition, if the service is not in the application module, then the module declaration must have a requires directive that specifies the module which exports the service.

I do not know what it wants to say, but what it means to me is not true. Maybe I misinterpret this sentence, which is likely.

Looking at our example, the Consumer module uses something that implements the javax0.serviceinterface.ServiceInterface interface. This something is actually the Provider module and the implementation in it, but that is decided only at run time and can be replaced by any other fitting implementation. Thus Consumer needs the interface, and therefore it has to have a requires directive in its module-info file requiring the ServiceInterface module. It does not have to require the Provider module! The Provider module similarly depends on the ServiceInterface module and has to require it. The ServiceInterface module does not require anything. It only exports the package that contains the interface.

It is also important to note that neither the Provider nor the Consumer module is required to export any package. Provider provides the service declared by the interface and implemented by the class named after the with keyword in the module-info file. It provides this single class to the world and nothing else. Exporting the package containing it would be redundant, and it would possibly and unnecessarily open classes that happen to be in the same package but are module internal. Consumer is invoked from the command line using the -m option, and even that does not require the module to export any package.

The command line to start the program is

java -p Consumer/target/Consumer-1.0.0-SNAPSHOT.jar:\
     ServiceInterface/target/ServiceInterface-1.0.0-SNAPSHOT.jar:\
     Provider/target/Provider-1.0.0-SNAPSHOT.jar \
  -m Consumer/javax0.serviceconsumer.Consumer

and it can be executed after a successful mvn install command. Note that the maven compiler plugin has to be at least version 3.6, otherwise the ServiceInterface-1.0.0-SNAPSHOT.jar will be on the classpath instead of the module path during compilation, and the compilation will fail because the module-info.class file is not found.

What is the point

The ServiceLoader can be used when an application is wired with some modules only at run time. A typical example is an application with plugins. I ran into this exercise myself when I ported ScriptBasic for Java from Java 7 to Java 9. The BASIC language interpreter can be extended with classes containing public static methods annotated with BasicFunction. Previous versions required the host application embedding the interpreter to list all the extension classes, calling an API in the code. This is superfluous. The ServiceLoader can locate the service implementations, for which the interface (ClassSetProvider) is defined in the main program, and then the main program can call the service implementations one after the other and register the classes they return. That way the host application does not need to know anything about the extension classes; it is enough that the extension classes are on the module path and that each provides the service.
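
The host side can be as simple as this sketch (ClassSetProvider is the interface mentioned above, but its exact method signature is my assumption):

Set<Class<?>> extensionClasses = new HashSet<>();
for (ClassSetProvider provider : ServiceLoader.load(ClassSetProvider.class)) {
    extensionClasses.addAll(provider.provide()); // provide() is an assumed method name
}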

The JDK itself also uses this mechanism to locate loggers. The Java 9 JDK contains the System.LoggerFinder class, which can be implemented as a service by any module, and if there is an implementation that the ServiceLoader can find, the method System.getLogger() will use it. This way logging is tied neither to the JDK nor to any library at compile time. It is enough to provide the logger at run time, and the application, the libraries the application uses, and the JDK will all use the same logging facility.

With all these changes to the service loading mechanism, making it part of the language instead of something piggybacked on resource loading, one may hope that this type of service discovery will gain momentum and will be used on a broader scale than before.