DevDays Vilnius 2018

Starting today, I am participating in DevDays in Vilnius for the whole week.

Come and see me on May 24, 2018, in Hall 2 at 14:00 and also at 15:55.

I will talk about

14:00 Prevent Hacking with Modules in Java 9 and
15:55 Comparing Golang and Understanding Java Value Types

https://devdays.lt/peter-verhas/

https://devdays.lt/

Java getting back to the browser?

Betteridge’s law of headlines applies.

Lead-in

This article talks about WebAssembly and can be read to get a first glimpse of it. At the same time, I articulate my opinions and doubts. The summary is that WebAssembly is an interesting approach, and we will see what it becomes.

Java in the Browser, the past

There was a time when we could run Java applets in the browser. There were a lot of problems with it, although the idea was not total nonsense. At the time, nobody could tell that the future of browser programmability would not be Java. Today we know that JavaScript was the winner, and applets are deprecated in Java 9 and are going to be removed from later Java versions. This, however, does not mean that JavaScript is without issues, or that it is the only and best possible solution for the purpose that a person can imagine.

JavaScript has language problems; there are a lot of WTFs in the language. The biggest shortcoming, in my opinion, is that it is a single language. Developers are different and like different languages. Projects are different and are best solved by different programming languages. Even Java would not be so immensely successful without the JVM infrastructure, which supports so many different languages. There are a lot of languages that run on the JVM, even such crap as ScriptBasic.

Now you can say that the same is true for the JavaScript infrastructure. There are other languages that are compiled to JavaScript. For example, there is TypeScript, or there is even Java with the GWT toolkit. JavaScript is a target language, especially with asm.js. But still, it is a high-level, object-oriented, memory-managed language. It is nothing like machine code.

Compiling to JavaScript invokes the compiler once, then the JavaScript syntax analyzer, the internal bytecode, and then the JIT compiler. Aren’t those a few too many compilers until we get to the bits that are fed into the CPU? Why should we download JavaScript in textual format to the browser and compile it into bytecode each time a page is opened? The textual format may be larger, though compression technologies are fairly advanced, and the compilation runs millions of times on client computers, emitting a lot of carbon into the air, where we already have enough; no need for more.

(Derail: Somebody once told me that he had an advanced compression algorithm that could compress any file into one bit. There is no issue with the compression. The decompression is problematic, though.)

WebAssembly

Why can’t we have some bytecode-based virtual machine in the browser? Something like what the JVM once was for applets. This is what the WebAssembly people were thinking about in 2015. They created WebAssembly.

WebAssembly is a standard program format that is executed in the browser nearly as fast as native code. The original idea was to “complement JavaScript to speed up performance-critical parts of web applications and later on to enable web development in other languages than JavaScript.” (Wikipedia)

Today the interpreter runs in Firefox, Chromium, Google Chrome, Microsoft Edge, and Safari. You can download a binary program to the browser and invoke it from JavaScript. There is also some tooling that supports developing programs in “assembly” and also in higher-level languages.

Structure

The binary WebAssembly format contains blocks. Each block describes some characteristic of the code. I would say that most of the blocks are definition and structure tables, and there is one that is the code itself. There is a block that lists the functions the code exports, which can be invoked from JavaScript. There is also a block that lists the functions that the code wants to invoke from the JavaScript code.

The assembly code is really assembly. When I started to play with it, I had a nostalgic feeling. Working with these hex codes is similar to programming the Sinclair ZX80 in Z80 assembly, when we had to convert the code manually to hex on paper and then “POKE” the codes from BASIC into memory. (If you understand what I am talking about, you are seasoned. I wanted to write ‘old’ but my editor told me that would be rude. I am just kidding. I have no editor.)

I will not list all the features of the language. If you are interested, visit the WebAssembly page. There is consumable documentation about the binary format.

There are, however, some interesting features that I want to talk about, so that I can later explain my opinions.

No Objects

The WebAssembly VM is not an object-oriented VM. It does not know about objects, classes, or any similar high-level structures. It really looks like some machine language. It has some primitive types, like i32, i64, f32 and f64, and that is it. A compiler that compiles a high-level language has to make do with these.

No GC

Memory management is also up to the application. It is assembly. There is no garbage collector. The code works on a (virtually) continuous memory segment that can grow or shrink via a system call, and it is totally up to the application to decide which code fragment uses which memory address.

Two Stacks

There are two stacks the VM works with. One is the operand stack for arithmetic operations. The other is the call stack. Functions can call each other and return to the caller, and the call sequence is stored in a stack. This is a very usual approach. The only shortcoming is that there is no way to mark the call stack and purge it when an exception happens. The only way to handle a try/catch programming structure is to generate code before and after function calls that checks for exception conditions; if the exception is not caught at the caller’s level, the generated code has to return to the next caller up. This way, exception handling walks up the call stack through the extra generated code around each function call. This slows down not only exception handling but also normal function calls.
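
To make this tangible, here is a minimal sketch, in plain Java for illustration only (it is not code that any WebAssembly compiler actually emits), of how a compiler may lower exceptions to a status flag checked after every call. All names in it are made up.

public class LoweredExceptions {
    // hypothetical exception slot that the generated code checks after each call
    static Object pendingException = null;

    static int callee(int x) {
        if (x < 0) {
            pendingException = "negative input"; // lowered 'throw'
            return 0;                            // dummy return value
        }
        return x * 2;
    }

    static int caller(int x) {
        int r = callee(x);
        if (pendingException != null) {
            return 0; // lowered propagation: return one frame up; paid on every call
        }
        return r + 1;
    }

    public static void main(String[] args) {
        int result = caller(-1);
        if (pendingException != null) { // lowered 'catch'
            System.out.println("caught: " + pendingException);
            pendingException = null;
        } else {
            System.out.println("result: " + result);
        }
    }
}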

Single Thread

There is no threading in WebAssembly.

Support, Tooling

The fact that most browsers support WebAssembly is only half the battle. There also have to be developer tools that support the concept, so that there is code to execute.

There is an LLVM-backed compiler solution, so technically any language that compiles to LLVM should be compilable to WebAssembly and run in the browser. There is a C compiler in the tooling, and you can also compile Rust to WebAssembly. There is also a textual format in case you want to program directly at the assembly level.

Security

Security is at least questionable. First of all, WebAssembly is binary, therefore it is not possible, or at least complex, to look at the code and analyze it. Downloading the code does not require channel encryption (TLS), therefore it is vulnerable to MITM attacks. Similarly, WebAssembly does not support code signing, which would assert that the code was not tampered with after being generated in the (hopefully protected) development environment.

WebAssembly runs in a sandbox, just like JavaScript, or like Flash used to. That is a fairly questionable architecture from the security point of view.

You can read more on the security questions in this article.

Roadmap

WebAssembly was developed for two years to reach a Minimum Viable Product (MVP) that can be used as a PoC. There are features, like garbage collection, multi-thread support, exception handling support, SIMD-type instructions, and direct DOM access from WebAssembly, which will be developed after the MVP.

Present and Future

After playing with WebAssembly for about a weekend, I can say that it is an interesting and nice toy. In its current state, it is a toy, nothing more. Without the features planned after the MVP, I see only one viable use case: WebAssembly is the perfect tool to deploy malicious mining code on client machines. In addition to that, any implementation flaw in the engine is a security risk. Note that these security risks come from a browser feature that gives no value to the average user. You can disable WebAssembly in some of the browsers. It is a little worrisome that it is enabled by default, although it is needed only by early adopters for PoCs and not for commercial projects. If I were paranoid, I would say that the browser vendors, like Google, have a hidden agenda with the WebAssembly engine in the browser.

I am afraid that we see no security issues with WebAssembly currently only because the technology is new and IT felons have not yet learned the tools. I am almost certain that security holes are lurking in the current code, waiting to be exploited. Disable WebAssembly in your browser until you actually want to use it. Perhaps in a few years (or decades).

The original aim was to amend JavaScript. With the features coming after the MVP, I strongly believe that WebAssembly will aim to replace JavaScript rather than amend it. There will be a time when we will be able to write applications that run in the browser in Golang, Swift, Java, C, Rust, or whatever language we want. So, looking at the question in the title, “will Java get back to the browser?”, the answer is definitely NO. But some kind of VM technology, JIT, and bytecode definitely will, sometime in the future.

But not yet.

Comparing files in Java

I am creating a series of video tutorials for PACKT about network programming in Java. There is a whole section about Java NIO. One sample program copies a file via a raw socket connection from a client to a server. The client reads the file from the disk, and the server saves the bytes to disk as they arrive. Because this is a demo, the server and the client run on the same machine, and the file is copied from one directory to the exact same directory under a different name. The proof of the pudding is in the eating: the files have to be compared.

The file I wanted to copy was created to contain random bytes. Transferring only text information can sometimes leave tricky bugs lurking in the code. The random file was created using this simple Java class:

package packt.java9.network.niodemo;

import java.io.FileOutputStream;
import java.io.IOException;
import java.util.Random;

public class SampleMaker {
    public static void main(String[] args) throws IOException {
        byte[] buffer = new byte[1024 * 1024 * 10]; // 10MB of random bytes, written 16 times: 160MB in total
        try (FileOutputStream fos = new FileOutputStream("sample.txt")) {
            Random random = new Random();
            for (int i = 0; i < 16; i++) {
                random.nextBytes(buffer);
                fos.write(buffer);
            }
        }
    }
}

Comparing files in IntelliJ is fairly easy, but since the files are binary and large, this approach is not really optimal. I decided to write a short program that not only signals that the files are different but also tells where the difference is. The code is extremely simple:

package packt.java9.network.niodemo;

import java.io.BufferedInputStream;
import java.io.FileInputStream;
import java.io.IOException;

public class SampleCompare {
    public static void main(String[] args) throws IOException {
        long start = System.nanoTime();
        BufferedInputStream fis1 = new BufferedInputStream(new FileInputStream("sample.txt"));
        BufferedInputStream fis2 = new BufferedInputStream(new FileInputStream("sample-copy.txt"));
        int b1 = fis1.read(), b2 = fis2.read();
        long pos = 1; // position (1-based) of the bytes currently compared
        while (b1 != -1 && b2 != -1) {
            if (b1 != b2) {
                System.out.println("Files differ at position " + pos);
            }
            pos++;
            b1 = fis1.read();
            b2 = fis2.read();
        }
        if (b1 != b2) {
            System.out.println("Files have different length");
        } else {
            System.out.println("Files are identical, you can delete one of them.");
        }
        fis1.close();
        fis2.close();
        long end = System.nanoTime();
        System.out.print("Execution time: " + (end - start)/1000000 + "ms");
    }
}

The running time comparing the two 160MB files is around 6 seconds on my SSD-equipped MacBook, and it does not improve significantly if I specify a large, say 10MB, buffer as the second argument to the constructor of BufferedInputStream. (On the other hand, if we do not use BufferedInputStream at all, then the time is approximately ten times more.) This is acceptable, but if I simply issue diff sample.txt sample-copy.txt from the command line, then the response comes back significantly faster than 6 seconds. The difference can come from many things, like Java startup time, or the code being interpreted at the start of the while loop until the JIT compiler thinks it is time to start working. My hunch is, however, that the code spends most of the time reading the file into memory. Reading the bytes into the buffer is a complex process. It involves the operating system, the device drivers, and the JVM implementation; they move bytes from one place to the other, and in the end we only compare the bytes, nothing else. It can be done in a simpler way. We can ask the operating system to do it for us and skip most of the Java runtime activities, file buffers, and other glitter.

We can ask the operating system to read the file into memory and then just fetch the bytes one by one from where they are. We do not need a buffer, which would be a Java object consuming heap space. We can use memory-mapped files. After all, memory-mapped files use Java NIO, and that is exactly the topic of the part of the tutorial videos currently in the making.

Memory-mapped files are read into memory by the operating system, and the bytes are available to the Java program. The memory is allocated by the operating system and does not consume heap memory. If the Java code modifies the content of the mapped memory, then the operating system writes the change to disk in an optimized way, when it thinks it is due. This, however, does not mean that the data is lost if the JVM crashes. When the Java code modifies the memory-mapped file, it modifies memory that belongs to the operating system, which remains available and valid after the JVM has stopped. There is no guarantee and no 100% protection against power outages and hardware crashes, but those are very low-level concerns. If anyone is afraid of those, then the protection should be at the hardware level; Java has nothing to do with it anyway. With memory-mapped files, we can be sure that the data is saved to disk with a certain, very high probability that can only be increased by failure-tolerant hardware, clusters, uninterruptible power supplies, and so on. These are not Java. If you really have to do something from Java to have the data written to disk, you can call the MappedByteBuffer.force() method, which asks the operating system to write the changes to disk. Calling this too often and unnecessarily may hinder performance, though. (Simply because it writes the data to disk and returns only when the operating system reports that the data was written.)
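
In case you want to see it in action, here is a minimal sketch of a writable mapping with an explicit force() call. (The MapWriteDemo class is mine and not part of the tutorial code; it assumes sample.txt exists and is not empty.)

package packt.java9.network.niodemo;

import java.io.IOException;
import java.io.RandomAccessFile;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;

public class MapWriteDemo {
    public static void main(String[] args) throws IOException {
        try (RandomAccessFile file = new RandomAccessFile("sample.txt", "rw");
             FileChannel ch = file.getChannel()) {
            MappedByteBuffer map = ch.map(FileChannel.MapMode.READ_WRITE, 0L, ch.size());
            map.put(0, (byte) 'X'); // modifies OS-owned mapped memory, not the Java heap
            map.force();            // ask the OS to write the change to disk now
        }
    }
}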

Reading and writing data using memory-mapped files is usually much faster in the case of large files. To get the appropriate performance, the machine should have significant memory; otherwise, only part of the file is kept in memory and the number of page faults increases. One of the good things is that if the same file is mapped into memory by two or more different processes, then the same memory area is used. That way, processes can even communicate with each other.

The comparing application using memory mapped files is the following:

package packt.java9.network.niodemo;

import java.io.IOException;
import java.io.RandomAccessFile;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;

public class MapCompare {
    public static void main(String[] args) throws IOException {
        long start = System.nanoTime();
        FileChannel ch1 = new RandomAccessFile("sample.txt", "r").getChannel();
        FileChannel ch2 = new RandomAccessFile("sample-copy.txt", "r").getChannel();
        if (ch1.size() != ch2.size()) {
            System.out.println("Files have different length");
            return;
        }
        long size = ch1.size();
        ByteBuffer m1 = ch1.map(FileChannel.MapMode.READ_ONLY, 0L, size);
        ByteBuffer m2 = ch2.map(FileChannel.MapMode.READ_ONLY, 0L, size);
        for (int pos = 0; pos < size; pos++) {
            if (m1.get(pos) != m2.get(pos)) {
                System.out.println("Files differ at position " + pos);
                return;
            }
        }
        System.out.println("Files are identical, you can delete one of them.");
        long end = System.nanoTime();
        System.out.print("Execution time: " + (end - start) / 1000000 + "ms");
    }
}

To memory-map the files, we first have to open them using the RandomAccessFile class and ask for the channel from that object. The channel can be used to create a MappedByteBuffer, which is the representation of the memory area where the file content is loaded. The map method in the example maps the file in read-only mode, from the start of the file to the end of the file. We try to map the whole file. This works only if the file is not larger than 2GB: the start position is a long, but the size of the area to be mapped is limited by the size of an int.
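
For files above that limit, the file has to be mapped and compared in chunks. The following is a minimal sketch of the idea; the ChunkedCompare class and the 1GB chunk size are mine, not part of the tutorial code:

package packt.java9.network.niodemo;

import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;

public class ChunkedCompare {
    // compares two equally sized channels, mapping at most 1GB at a time
    // so that each mapping stays below the Integer.MAX_VALUE limit of map()
    static boolean sameContent(FileChannel ch1, FileChannel ch2) throws IOException {
        final long CHUNK = 1024L * 1024 * 1024;
        final long size = ch1.size();
        for (long offset = 0; offset < size; offset += CHUNK) {
            final long length = Math.min(CHUNK, size - offset);
            ByteBuffer m1 = ch1.map(FileChannel.MapMode.READ_ONLY, offset, length);
            ByteBuffer m2 = ch2.map(FileChannel.MapMode.READ_ONLY, offset, length);
            if (!m1.equals(m2)) { // equals() compares the remaining bytes
                return false;
            }
        }
        return true;
    }
}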

Generally, that is it… Oh yes, the running time comparing the two 160MB random-content files is around 1 second.

Rating Articles

My last article received the comment: “Weakest article in a long time, and it shows a prime example of typical inch pincher stuff certain people still like to make a fuzz about.”

To ease this type of feedback for you I switched on the rating functionality. (I do not know why I did not do that before.)

You can simply click on the stars at the top of the article to express that you like or dislike it. (The stars are not shown on the opening page; you have to click on the title of the article to get to the article’s own page.) This will help me to write better articles, and it will also help other readers to skip the articles that are not that good.

Format the code

PACKT recently published the book Mastering Java 9, which I co-authored. The author who started to write the book could not finish it, and PACKT had to find people who agreed to write some of the chapters. I had previously successfully finished Programming Java 9 by Example, and PACKT asked me if I could help. In the end, we wrote the book with Dr. Edward Lavieri.

To my surprise, he used C# bracket placement in the book. That is, the { character is placed not at the end of the line but rather at the start of the next line.

if( myCondition )
{
  do something;
}

This is against the usual Java coding convention.

When I started programming in C in 1984, I used this bracket placement. At that time, the Internet was not reachable, and there were no tools to share opinions with such a wide audience as now. It was also not clear how much formatting matters when we are coding. What is more, it was not even clear how important readability is.

Later I learned that the convention in C programming is usually to put the { character at the end of the line.

if( myCondition ){
  do something;
}

In the case of C, this is not always so. Some programmer teams use the first bracket placement, while others use the latter.

In the case of Java, the C#-style bracket placement is almost extinct. Perhaps it would be interesting to see statistics over the sources available on GitHub to see what percentage of Java code uses this or that style.

Why is it interesting? Because

You should not use C# formatting when you program Java code!

There are exceptions, as always. For example, the company you work for may insist on the other coding style. In that case, the company made a bad decision, and although it may be okay or even great working for this company, this is certainly a company smell. (See code smell.)

Why should you not use the other style?

Because of readability. Readability is subjective, and still, in this case, I dare say that the Java-style bracket placement is more readable. Hold your horses before ranting; I will explain.

Что было раньше, курица или яйцо?

For most of you, the above sentence is not readable. I can read it because I grew up in Eastern Europe, where learning Russian was mandatory. It is also readable for most Russian people. They can not only read it, they can even understand it. For them, it is just as readable as the English sentence

Which came first, the chicken or the egg?

for us.

Readability depends on what we are used to. Java programmers are used to the

if( myCondition ){
  do something;
}

bracket placement. C# programmers are used to the other style. If you see code that is formatted differently, you may overlook some aspect of the code that you would not miss otherwise. The difference is subtle, but it is there. When you hire Java developers, you are more likely to find good Java developers at a reasonable price who use and are accustomed to the industry standard than developers who are accustomed to the C# style.

You can find, here and there, developers who are also fluent in C# and can read Cyrillic… oops… C# “characters”, but they are less common than pure Java developers.

The bottom line is that the TCO of the code will be lower during the lifetime of its development and maintenance if you follow the industry standard. It is that simple.

P.S.: Buy the books!

Raid, backup and archive

This is a short tutorial about the similarities and differences between redundant storage, backup, and archive functionality. I felt the need to create this short introduction because I realized that many IT professionals do not know the difference between these operations and often mix them up or use the wrong approach for some purpose.

I personally once witnessed a backup at a Hungarian bank that was stored on a partition of a RAID-set disk, which also held the operational data. A RAID controller failure happened. The backup was unusable. Technically, it was not a backup. A Digital Equipment Corp. engineer spent two weeks restoring the allocation bits of the RAID set to recover the account data. Although neither the bank, which shall not be named, nor Digital exists anymore, I am more than convinced that similar “backups” still do.

What these methods are

Redundant storage, backup, and archive all copy operational data. They do so aiming at more stability of operation. The copied data is stored in a redundant way, and in case some event deletes or corrupts data, the copied version is still available. The differences between these data-redundancy increasing strategies are

  • (NEED) the type of event that creates the need for the deleted data
  • (CAUSE) the type of event that causes the deletion of the data
  • (DISCOVERY) how the data loss or the need for the data is recognized
  • (HOW) how the actual copy is created and stored

Redundant storage

(HOW) Redundant storage copies the data online and all the time. When there is some change in the data, the redundant copy is created on some storage media as soon as the hardware and software make it possible. The copy operation is not batched. It does not wait for a bunch of data to be copied together. Data is copied as soon as possible.

The actual implementation is usually some RAID configuration. A RAID configuration connects two or more same-size disks in parallel. In the case of two disks, anything written to one is written to the other at the same time. When reading, either of the disks can be used, which makes reading twice as fast in terms of data transfer, assuming that the data transfer bus between the disks and the computer is fast enough. Seek time in the case of rotating (non-SSD) disks is not improved.

When there are three or more disks, writing works a bit differently. In this situation, whenever a bit is changed on one disk, a bit is also changed on the last disk of the RAID set. The RAID controller keeps each bit of the last disk equal to the XOR value of the same bits on the other disks. That way, the data is “partially copied”.

In case of a hardware failure, RAID solutions usually allow the faulty disk to be replaced without switching off the disk system. The controller automatically reconstructs the missing data.
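
The parity trick itself is just XOR arithmetic, which a few lines of Java can demonstrate (the bytes here stand in for whole disks; nothing in this snippet is RAID-specific):

public class XorParityDemo {
    public static void main(String[] args) {
        byte d1 = 0b0110_1100;          // data bits on disk 1
        byte d2 = 0b0101_0011;          // data bits on disk 2
        byte parity = (byte) (d1 ^ d2); // what the controller keeps on the parity disk

        // disk 1 fails: its content is the XOR of all the surviving disks
        byte recovered = (byte) (d2 ^ parity);
        System.out.println(recovered == d1); // prints: true
    }
}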

(NEED) Redundant storage keeps the data available during normal operation and prevents data loss in case of (CAUSE) hardware failure. All the data is copied all the time, and in case of a failure, the recovery causes only a few milliseconds of delay in data access. Restoring the redundancy itself may take longer, in the range of a few minutes or hours, but the data stays available unless there are multiple failures.

(DISCOVERY) Data loss is detected automatically because the redundancy is checked upon every read.

Backup

(HOW) Backup copies data, usually to offline media. The copy is started at regular intervals, like every hour, day, or week. When a backup is executed, the files that changed since the last backup are copied to the backup media. A backup can cover the application data or the whole operating system. Many times, the operating system is not backed up: when there is a need to restore the information, the OS is installed fresh from installation media and only the application files are restored from the backup storage. This allows smaller backup storage and faster backup and restore execution.

There are different techniques, called full, partial, and differential backups. Creating backups without purging old data would grow the size of the backup media indefinitely. This would not only cost ever-increasing amounts of money for the media, but the burden of cataloging and keeping the old media would also mean a huge operational cost. To optimize the costs, old backups are deleted following a special strategy. As an example, a strategy can require creating a backup every day and deleting the backups that are older than one week, except those that were created on a Monday. Backups older than a month can also be deleted, except those that were created on the first Monday of the month, and similarly, backups older than a year may be deleted except the backups of January and June.
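
To make the example strategy concrete, here is a toy encoding of it in Java. The rules above are ambiguous at the edges (which Monday, which backup of January and June), so the interpretation below is mine; a real backup tool would make all of this configurable:

import java.time.DayOfWeek;
import java.time.LocalDate;
import java.time.Month;

public class RetentionPolicy {
    // decides whether a backup created on 'created' should still be kept on 'today'
    static boolean keep(LocalDate created, LocalDate today) {
        if (created.isAfter(today.minusWeeks(1))) {
            return true;                                       // keep everything from the last week
        }
        boolean firstMonday = created.getDayOfWeek() == DayOfWeek.MONDAY
                && created.getDayOfMonth() <= 7;
        if (created.isAfter(today.minusMonths(1))) {
            return created.getDayOfWeek() == DayOfWeek.MONDAY; // weekly level: Mondays survive
        }
        if (created.isAfter(today.minusYears(1))) {
            return firstMonday;                                // monthly level: first Mondays survive
        }
        return firstMonday                                     // yearly level: January and June only
                && (created.getMonth() == Month.JANUARY || created.getMonth() == Month.JUNE);
    }
}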

(NEED) The data stored on the backup media is needed when it is discovered that some data was deleted. (CAUSE) The reason for the deletion may be human error or sabotage. A user of the system mistyped the name of a record to be deleted, or thought that the data was not needed anymore, and later it turns out that this was a mistake. Sabotage is a deliberate action when somebody having access to the system deletes or alters data as wrongdoing. In either case, the data is ruined by human interaction. It may also be possible that the data is ruined by disaster (flood, fire, earthquake) or some hardware error that causes much more severe damage than a simple disk error.

The backup media itself can also be the target of sabotage. Disaster can also damage the backup media. For this reason, backups are usually stored offline, disconnected from the main operating system, and many times the media is transferred to a different location.

When data needs to be restored, the backup media, or a copy of it, has to be connected to the operational components, and the data has to be copied back to restore the information that was deleted or altered. Connecting it is usually a manual process, because anything automated can be a target for sabotage. Because of the manual nature of the process, restoring a backup usually takes a long time. It may be a few minutes, hours, or days. Usually, the older the backup, the more time is needed to get back the operational data.

Archive

(HOW) The creation of an archive is very similar to the creation of a backup. We copy some of the data to some offline media and store it for a long time. The archive copy is usually made of data that has not yet been archived; archiving this way is usually incremental. (CAUSE) An archive stores data that is deleted from the system deliberately by the normal operational processes, because it is not needed for the operation. The archive does not aim to provide a backup source for data that is found to be deleted accidentally. The data stored in the archive is never needed for normal operation. (DISCOVERY/NEED) The archive data is needed for extraordinary operations.

For example, a mobile company does not need the cell information of individual phones for a long time. It is operational data stored in the HLR and VLR databases, and this information is usually not even backed up. In case of data loss, it is faster to gather the actual information from the GSM network than to restore it from a backup, which would probably be fairly outdated (mobile phones move in the meantime). On May 9, 2002, robbers killed 8 people in the small Hungarian town of Mór. A few years later, when the investigation got to the point of examining mobile phone movements in the area, the data was no longer available as operational data, but it was available in the archives. Analyzing GSM cell data to support a homicide investigation is not a normal operation of a telecom company.

You archive data that you are obligated by law to store, or that you suspect you may need for some unforeseeable future purpose. Records that describe business-level operations and transactions are usually archived.

Comparison

The main points of the three methods, along the four aspects from above:

            Redundant storage         Backup                       Archive
NEED        continuous operation      accidentally deleted data    extraordinary operation
CAUSE       hardware failure          human error, sabotage,       deliberate deletion by the
                                      disaster                     normal processes
DISCOVERY   automatic, on every read  someone misses the data      an extraordinary need arises
HOW         online, all the time      offline media, regular       offline media, incremental,
                                      intervals                    stored long-term

As you can see from the above, none of these methods can replace another. They supplement each other, and if you do not implement one of them, then you can expect that the operation will be sub-par.

The example in the intro clearly explains why redundant storage does not eliminate the need for a backup. Similarly, archiving cannot be replaced by an otherwise proper backup solution. The error, in this case, will not face you as harshly and evidently, because of the long-term nature of the archive. Nevertheless, an archive is not the same as a backup.

In some cases, I have seen an archive used as the source for restoring lost data. This is a forgivable sin only when the data loss has already happened and the archive still has the data you need. On the other hand, the archive does not contain all the operational data, only the data that has long-term business relevance.

Summary

This was a short introduction to redundant storage, backup, and archive. Do not think that understanding what is written here makes you an expert in any of these topics. Each topic is a special expert area with tons of literature to learn and loads of exercises to practice and ace. On the other hand, you should now understand the basic roles of these methods, what they are good for and what they are not good for, and you should know the most important differences, so that you can avoid the mistakes that others have already committed.

There is no need to repeat old mistakes. Commit new ones!