Java Deep

Monthly Archives: April 2016

Comparing Golang with Java

First of all I would like to make a disclaimer. I am not an expert in Go. I started to study it a few weeks ago, so the statements here are kind of first impressions. I may be wrong in some of the subjective areas of this article, and perhaps I will revisit it some time later. But until then, here it is, and if you are a Java programmer you are welcome to read about my feelings and experiences, and at the same time you are more than welcome to comment and correct me where I am wrong.

Golang is impressive

As opposed to Java, Go is compiled to machine code and is executed directly, much like C. Because there is no VM, it is very different from Java. It is object oriented and at the same time functional to some extent, thus it is not only a new C with automated garbage collection. It is somewhere between C and C++, if we imagine the world of programming languages on a single line, which it is not. Looking through a Java programmer’s eyes, some things are so different that learning them is challenging, and it may give a deeper understanding of programming language structures and of how objects, classes and all these things work, even in Java.

I mean that if you understand how OO is implemented in Go, you may also understand some of the reasons why Java does these things differently.

In short, if you are impatient: do not let the seemingly weird structure of the language freak you out. Learn it and it will add to your knowledge and understanding, even if you do not have a project to develop in Go.

GC and not GC

Memory management is a crucial point in programming languages. Assembly lets you do everything, or rather it requires you to do everything. In the case of C there are some support functions in the standard library, but it is still up to you to free all the memory that you allocated calling malloc. Automated memory management starts somewhere along C++, Python, Swift and Java. Golang is also in this category.

Python and Swift use reference counting. When there is a reference to an object, the object itself holds a counter of the number of references that point to it. There are no pointers or references pointing backward, but when a new reference starts to reference an object the counter increases, and when a reference becomes null/nil or is redirected to another object the counter goes down. When the counter reaches zero there are no references to the object and it can be discarded. The problem with this approach is that an object may be unreachable while the counter is still positive. There can be cycles of objects referencing each other, and when the last object of such a cycle is released from the static, local and otherwise reachable references, the cycle starts to float in memory like bubbles in water: the counters are all positive but the objects are unreachable. The Swift tutorial has a very good explanation of this behavior and how to avoid it. But the point still stands: you have to care about memory management somewhat.

In the case of Java and other JVM languages (including the JVM implementation of Python) the memory is managed by the JVM. There is a full blown garbage collector that runs from time to time in one or more threads, in parallel with the working threads or sometimes stopping them (a.k.a. stop the world), marking the unreachable objects, sweeping them and compacting the presumably scattered memory. All you have to worry about is performance, if anything at all.

Golang is also in this category, with one small exception. It does not have references. It has pointers. The difference is crucial. Go can be integrated with external C code, and for performance reasons there is nothing like a reference registry in the run-time: the actual pointers are not known to the execution system. The allocated memory can still be analyzed to gather reachability information, and the unused “objects” can still be marked and swept, but memory can not be moved around to do the compacting. This was not obvious to me from the documentation for some time, and as I came to understand the pointer handling I kept looking for the magic the Golang wizards had implemented to do the compacting. I was sorry to learn that they simply had not. There is no magic.

Golang has garbage collection, but it is not a full GC as in Java: there is no memory compaction. That is not necessarily bad. A server can run for a very long time without the memory getting fragmented. Some of the JVM garbage collectors also skip the compacting step to decrease the GC pause when cleaning old generations and do the compacting only as a last resort. This last resort step is missing in Go and it may cause problems in rare cases. You are not likely to face the problem while learning the language.

Local variables

Local variables (and in newer versions sometimes objects) are stored on the stack in Java. So they are in C, C++ and other languages where a call stack is implemented. Regarding local variables Golang is no exception, except…

Except that you can simply return a pointer to a local variable from a function. That was a fatal mistake in C. In the case of Go the compiler recognizes that the allocated “object” (I will explain later why I use quotes) escapes the function and allocates it accordingly, so that it survives the return of the function and the pointer will not point to an already abandoned memory location containing no reliable data.

So this is absolutely legal to write:

package main

import (
	"fmt"
)

type Record struct {
	i int
}

func returnLocalVariableAddress() *Record {
	return &Record{1}
}

func main() {
	r := returnLocalVariableAddress()
	fmt.Printf("%d", r.i)
}

Closures

What is more, you can write functions inside functions and you can return functions, just like in a functional language (Go is a kind of functional language), and the local variables around them serve as the variables of a closure.

package main

import (
	"fmt"
)

func CounterFactory(j int) func() int {
	i := j
	return func() int {
		i++
		return i
	}
}

func main() {
	r := CounterFactory(13)
	fmt.Printf("%d\n", r())
	fmt.Printf("%d\n", r())
	fmt.Printf("%d\n", r())
}

Function return values

Functions can return not only one single value but multiple values. This can be bad practice if not used properly. Python does it. Perl does it. It can be put to good use. It is mainly used to return a value together with nil or an error value. This way the old habit of encoding the error into the returned value (usually returning -1 as an error code and some non-negative value when there is a meaningful result, as in the C standard library) is replaced with something much more readable.
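
A minimal sketch of the pattern, with a hypothetical half function of my own (only strconv.Atoi comes from the standard library):

package main

import (
	"fmt"
	"strconv"
)

// half returns a value and an error instead of encoding
// the error into the returned number
func half(s string) (int, error) {
	n, err := strconv.Atoi(s)
	if err != nil {
		return 0, err
	}
	return n / 2, nil
}

func main() {
	if h, err := half("42"); err == nil {
		fmt.Println(h) // 21
	}
	if _, err := half("not a number"); err != nil {
		fmt.Println("error:", err)
	}
}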

Multiple values on the two sides of an assignment are not restricted to functions. To swap two values you can write:

  a, b = b, a

Object Orientation

With closures and functions being first class citizens, Go is at least as object oriented as JavaScript. But it is actually more than that. Golang has interfaces and structs. They are not really classes, though. They are value types. They are passed by value, and wherever they are stored in memory there is only the pure data, no object header or anything like that. Structs in Go are very much like they are in C. They can contain fields, but they can not extend each other and they can not contain methods. Object orientation is approached a bit differently.

Instead of stuffing the methods into the class definition, you specify when you define the method itself which struct it applies to. Structs can also contain other structs, and when there is no name for such a field you can reference it by its type, which implicitly becomes its name. Or you can just reference the embedded fields and methods as if they belonged to the top struct.

For example:

package main

import (
	"fmt"
)

type A struct {
	a int
}

func (a *A) Printa() {
	fmt.Printf("%d\n", a.a)
}

type B struct {
	A
	n string
}

func main() {
	b := B{}
	b.Printa()
	b.A.a = 5
	fmt.Printf("%d\n", b.a)
}

This is almost, or a kind of, inheritance.

When you specify the struct on which the method can be invoked, you can specify the struct itself or a pointer to it. If the method is applied to the struct then the method gets a copy of the caller struct (the struct is passed by value). If the method is applied to a pointer to the struct then the pointer is passed (a kind of pass by reference). In the latter case the method can also modify the struct (in this sense structs are not pure value types, since value types are immutable). Either form can be used to fulfill the requirements of an interface. In the example above Printa is applied to a pointer to the struct A. Go calls this parameter the receiver of the method.
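
A minimal sketch of the difference between the two receiver forms (my own Counter example):

package main

import (
	"fmt"
)

type Counter struct {
	n int
}

// value receiver: the method works on a copy of the caller struct
func (c Counter) IncByValue() {
	c.n++
}

// pointer receiver: the method can modify the original struct
func (c *Counter) IncByPointer() {
	c.n++
}

func main() {
	c := Counter{}
	c.IncByValue()
	fmt.Println(c.n) // 0: only the copy was incremented
	c.IncByPointer()
	fmt.Println(c.n) // 1: the original struct was modified
}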

Go syntax is also a bit lenient about structs and pointers to them. In C you can have a struct and write b.a to access a field of the struct. In the case of a pointer to the structure, in C you have to write b->a to access the same field, and with a pointer b.a is a syntax error. Go says that writing b->a is pointless (you can interpret this literally). Why litter the code with -> operators when the dot operator can be overloaded: field access in the case of a struct and, well, field access through a pointer. Very logical.
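
Concretely, a tiny sketch of my own:

package main

import (
	"fmt"
)

type Point struct {
	x int
}

func main() {
	v := Point{x: 1}
	p := &v
	fmt.Println(v.x) // field access on the struct value
	fmt.Println(p.x) // the same dot works through the pointer, no '->' needed
}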

Because the pointer is as good as the struct itself (to some extent) you can write:

package main

import (
	"fmt"
)

type A struct {
	a int
}

func (a *A) Printa() {
	if a == nil {
		fmt.Println("a is nil")
	} else {
		fmt.Printf("%d\n", a.a)
	}
}

func main() {
	var a *A = nil
	a.Printa()
}

Yes, and this is the point where, as a true-hearted Java programmer, you should not freak out: we called a method on a nil pointer! How can that happen?

Type is in the variable and not in the object

This is why I was using quotes when writing “object”. When Go stores a struct, it is just a piece of memory. It does not have an object header (though it may, since that is a matter of implementation and not of the language definition, but it reasonably does not). It is the variable that holds the type of the value. If the type of the variable is a struct then it is known already at compile time. If it is an interface then the variable points to the value and at the same time also references the actual type it holds the value for.

If the variable a is an interface and not a pointer to a struct, you can not do the same: you get a runtime error. (Addition: as theo pointed out in his comment, this is because the variable does not carry the type and the Go runtime does not know which implementation of the polymorphic method to call. However, you can have an interface variable that is nil and still carries the reference to a specific type, as theo shows in his example.)
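
A minimal sketch of that distinction (my own example, not theo's), reusing the struct A from above:

package main

import (
	"fmt"
)

type Printer interface {
	Printa()
}

type A struct {
	a int
}

func (a *A) Printa() {
	if a == nil {
		fmt.Println("a is nil")
	} else {
		fmt.Printf("%d\n", a.a)
	}
}

func main() {
	var p Printer // nil interface: no type and no value stored in the variable
	// p.Printa() // would be a runtime error: there is no type to select the method from

	var a *A = nil
	p = a      // the interface now carries the type *A and a nil pointer value
	p.Printa() // prints "a is nil": the runtime knows which method to call
}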

Implementing interfaces

Interfaces are very simple in Go and at the same time very complex, or at least different from what they are in Java. An interface declares a bunch of functions that structs should implement if they want to be compliant with the interface. Inheritance is done the same way as in the case of structs. The strange thing is that you do not need to declare for a struct that it implements an interface if it does. After all it is really not the struct that implements the interface, but rather the set of functions that use the struct, or a pointer to the struct, as receiver. If all the functions are implemented then the struct implements the interface. If some of them are missing then the implementation is not complete.

Why do we need the ‘implements’ keyword in Java and not in Go? Go does not need it because it is fully compiled and there is nothing like a class loader that loads separately compiled code at run-time. If a struct is supposed to implement an interface but does not, this is discovered at compile time without explicitly declaring that the struct implements the interface. You can get around this and cause a run-time error if you use reflection (which Go has), but the ‘implements’ declaration would not help with that anyway.
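
A minimal sketch of this implicit matching (my own Greeter example):

package main

import (
	"fmt"
)

type Greeter interface {
	Greet() string
}

type English struct{}

// English satisfies Greeter simply by having this method;
// no 'implements' declaration is needed anywhere
func (e English) Greet() string {
	return "Hello"
}

func sayHi(g Greeter) {
	fmt.Println(g.Greet())
}

func main() {
	sayHi(English{}) // compiles because the method set matches the interface
}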

Go is compact

Go code is compact and not forgiving. In other languages there are characters that are simply useless. We got used to them during the last 40 years, since C was invented and all the other languages followed its syntax, but that does not necessarily mean it is the best way to follow. After all, we have known since C that the ‘dangling else’ problem is best addressed using { and } around the code branches of the ‘if’ statement. (Maybe Perl was the first mainstream language with C-like syntax that required that.) However, if we must have the braces, there is no point in enclosing the condition between parentheses. As you could see in the code above:

...
	if a == nil {
		fmt.Println("a is nil")
	} else {
		fmt.Printf("%d\n", a.a)
	}
...

there is no need for them, and Go does not even allow them. You may also notice that there are no semicolons. You can use them, but you do not need to. Inserting them is a preprocessing step on the source code and it is very effective. Most of the time they are clutter anyway.

You can use ‘:=’ to declare a new variable and assign a value to it. The expression on the right hand side usually defines the type, so there is no need to write ‘var x typeOfX = expression‘. On the other hand, if you import a package or assign a variable that you do not use afterwards, that is a bug. Since it can be detected at compile time, it is a compile error and compilation fails. Very smart. (Sometimes annoying, when I import a package that I intend to use and, before referencing it, I save the code and IntelliJ intelligently removes the import, just to help me.)
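
A minimal sketch of both points (my own throwaway example):

package main

import (
	"fmt"
)

func main() {
	x := 42          // the type of x is inferred from the right hand side
	name := "Golang" // no 'var name string = "Golang"' needed
	fmt.Println(x, name)

	// y := 13       // uncommenting this line breaks the build:
	//               // an unused variable is a compile error, not a warning
}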

Threads and queues

Threads and queues are built into the language. They are called goroutines and channels. To start a goroutine you only have to write go functioncall() and the function will be started concurrently in a new goroutine, which the runtime schedules onto threads. Although there are functions in the standard Go library to lock “objects”, the native multi-thread programming style uses channels. A channel is a built-in type in Go: a FIFO with a fixed (possibly zero) capacity for values of some other type. You can push a value into a channel and a goroutine can pull it off. If the channel is full the push blocks, and if the channel is empty the pull blocks.
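
A minimal sketch of a producer goroutine and a buffered channel (my own example):

package main

import (
	"fmt"
)

// produce pushes n numbers into the channel and then closes it
func produce(ch chan int, n int) {
	for i := 0; i < n; i++ {
		ch <- i // blocks when the buffer is full
	}
	close(ch)
}

func main() {
	ch := make(chan int, 2) // buffered channel with capacity 2
	go produce(ch, 5)       // start the producer in a separate goroutine
	for v := range ch {     // pulling blocks until a value arrives or the channel is closed
		fmt.Println(v)
	}
}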

There are errors, no exceptions. Panic!

Go does have exception handling, but it is not supposed to be used the way it is in Java. The exception is called ‘panic’ and it is really to be used when there is some real panic in the code. In Java terms it is similar to a throwable that ends with ‘…Error’. When there is an exceptional case, some error that can be handled, this state is returned by the system call and the application functions are expected to follow a similar pattern. For example

package main

import (
	"log"
	"os"
)

func main() {
	f, err := os.Open("filename.ext")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()
}

the function ‘Open’ returns the file handler and nil, or nil and the error value. If you execute it on the Go Playground you get the error displayed.

This does not really fit the practice we got used to when programming in Java. It is easy to miss some error condition and write

package main

import (
	"os"
)

func main() {
	f, _ := os.Open("filename.ext")
	defer f.Close()
}

that just ignores the error. It is also cumbersome to check for a possible error at each and every system or application call that may return one, when we are only interested in whether a longer chain of commands produced an error and we do not really care which one.
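
A minimal sketch of what that repetition looks like, with a hypothetical copyPrefix helper of my own:

package main

import (
	"fmt"
	"os"
)

// copyPrefix copies up to the first 64 bytes of one file into another;
// every step that may fail has to be checked separately
func copyPrefix(from, to string) error {
	src, err := os.Open(from)
	if err != nil {
		return err
	}
	defer src.Close()

	dst, err := os.Create(to)
	if err != nil {
		return err
	}
	defer dst.Close()

	buf := make([]byte, 64)
	n, err := src.Read(buf)
	if err != nil {
		return err
	}
	_, err = dst.Write(buf[:n])
	return err
}

func main() {
	if err := copyPrefix("in.txt", "out.txt"); err != nil {
		fmt.Println("the chain failed:", err)
	}
}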

No finally, defer instead

Closely coupled with the exception handling is the feature that Java implements with try/catch/finally. In Java you can have code in a finally block that is executed no matter what. Go provides the keyword ‘defer’ that lets you specify a function call to be invoked before the function returns, even if there is/was a panic. This solution gives you fewer options to abuse: you can not have arbitrary code executed deferred, only a function call. In Java you can even have a return statement in the finally block, or end up in a mess trying to handle the situation when the code in the finally block may also throw an exception. Go is not prone to that. I like that.
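
A minimal sketch of defer surviving a panic (recover is not discussed above, I only use it here so that the example does not crash):

package main

import (
	"fmt"
)

func risky() {
	defer fmt.Println("deferred cleanup runs") // executed even though we panic below
	panic("something went terribly wrong")
}

func main() {
	defer func() {
		// recover stops the panic from crashing the program,
		// roughly comparable to catching a Throwable in Java
		if r := recover(); r != nil {
			fmt.Println("recovered:", r)
		}
	}()
	risky()
	fmt.Println("never reached")
}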

Other things…

that may seem weird at first, like:

  • public functions and variables are capitalized; there are no keywords like ‘public’ or ‘private’
  • the source code of libraries is to be imported into the source of the project (I am not sure I understood that properly)
  • the lack of generics
  • code generation support built into the language in the form of comment directives (this is really a wtf)

In general Go is an interesting language. It is not a replacement for Java, not even at the language level. They are not supposed to serve the same type of tasks. Java is an enterprise development language, Go is a system programming language. Go, just as well as Java, is continuously developing, so we may see some change in that in the future.

Uptime

I worked for five years in the telecom industry, where service availability was God. We had to reach five nines, that is, 99.999% of the time the service had to work.

Just a simple calculation: 99% availability means that the service may be out of service three and a half days per year. 99.9% means an outage period close to 9 hours, 99.99% barely an hour, and 99.999% five minutes. Is it okay if the phone is out of service at most 5 minutes a year? Certainly. How about your online banking interface? How about a pacemaker?
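
If you want to check the numbers, here is a tiny sketch of my own that computes the allowed yearly downtime for each level:

package main

import (
	"fmt"
	"time"
)

func main() {
	year := 365 * 24 * time.Hour
	for _, availability := range []float64{0.99, 0.999, 0.9999, 0.99999} {
		downtime := time.Duration((1 - availability) * float64(year))
		fmt.Printf("%.3f%% -> %v allowed downtime per year\n", availability*100, downtime)
	}
}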

How many 9s do we need?

The point is that you should not aim at five nines (or any other availability value) if you do not know why you need it. The bottom line is, as always, money. Getting to a certain level of availability has a cost label attached. The more nines you need in your uptime number, the higher the cost, and the later nines cost exponentially more. On the other side you have the income, the reputation and all the other things that count as money and are adversely affected by downtime.

Telecom invests a lot of money to get to five nines. Does it pay off? If we only count the revenue lost during the downtime (people can not call and thus do not pay for the calls), then it does not. If we count the reputation and the customers churning to the competition, then maybe. If we consider emergency calls: definitely.

Telecom established its values a long time ago, and the five nines came from common knowledge distilled over a long period. When you design a service for the enterprise or for the public, you do not have that. You have to assess what is worth doing.

Ten years ago I was studying economics. A friend was also studying there, and at one of the micmac (mixed micro and macro economics) exams he barely passed, though he usually performed excellently. Shocked by the unexpectedly low performance, the professor asked him about it. He said: “I was not only studying but also following in practice what you taught us: invest no more than what is needed to reach the goals. Any more investment is wasted.”

Assume for now that we have assessed the costs and benefits and decided that we need X nines.

Is more always better?

But what about accidentally reaching more uptime than needed? Operations wasted money and invested too much into availability. This is certainly not good. It may also happen that over time small investments (e.g. continuous improvement of personnel, new technologies) lead to better uptime. If it costs nothing, the higher uptime is an extra gift. But if there is a significant difference between the aimed availability and the actual one, the investment was definitely oversized. What will prevent operations from aiming higher than what is financially feasible?

It is a difficult organizational question. It is not as simple as putting money into one end of the tube and getting availability out of the other. You can invest in many different things that lead to higher availability. You can educate your support people. You can buy higher quality, higher MTBF hardware. You can invest in software quality. Some of these have a lower cost, others cost a lot. If operations reaches higher availability in practice than needed, then they wasted money. But it is hard, if not impossible, to tell which money.

Measure people's performance on system uptime?

Should you measure your operations people on uptime? Certainly. That is a core competency they have: providing the aimed uptime. If operations does not deliver the required uptime there will be consequences. You have to assess the reasons and design changes in the system, in the policies, in the management hierarchy, but certainly something has to change. If there is no change, the availability remains low.

If there are no consequences of underperformance for the people, there certainly will be for the company.

Should you incentivize higher than aimed uptime? Obviously no. Maybe not that obviously, but certainly: no. If you incentivize it, people will be motivated to reach the goal. If they are motivated to reach something the company does not need, they will spend company money to reach it. If you incentivize anyone for getting X, they will get X. A simple feedback loop. Simple. OK. But if you do not incentivize higher than needed availability then …

should you punish higher availability?

If people get punished for higher availability, operations will still be motivated to spend extra money to get higher availability and then ruin it down to the required level by means that they can control. (Simply switching the service off “illegally” or claiming maintenance.) Even if you do not punish them, they are still motivated to spend the extra money just to be safe. We get into a mix of incentives and punishments controlling quality and budget, and companies sometimes end up with extremely complex motivational schemes.

This dilemma is nothing special. It is a very general management topic. You can not measure what you really want to achieve in your organization. In the case of operations you can not measure whether operations is spending the money in the wisest possible way and delivering services reliably at the level the business needs. You can not measure it, simply because it can not be defined what the wisest way is.

What we can do is measure something that is somehow related to what we really need. We start to measure X. The management problem is that if you measure X, people will deliver X. So what is the solution? Measurement and culture. We know what the measurements are, we aim at the direct goals we are measured on, but we perform the strategic, tactical and everyday tasks keeping an eye on the company goals. That is the culture part. Without culture we are only robots, and robots don’t do good work.

Yet.

Random Ideas about Code Style

Some of the sentences in this article are ironic. Others are to be taken seriously. It is up to the reader to separate them. Start with these sentences.

How long should a method be in Java?

This is a question I ask many times during interviews. There is no single best answer. There are programming styles, and different styles are just different; many of them can be OK. I absolutely accept somebody saying that a method should be as short as possible, but I can also accept methods of 20 to 30 lines. Above 30 lines I would be a bit reluctant.

“When I wrote this code only God and I understood it. Now only God does.”
Quote from an unknown programmer, last quarter of the 20th century.

The most important thing is that the code is readable. When you write the code, you understand it. At least you think you understand what you wanted to code. What you actually coded may be a different story. And here comes the importance of readability as opposed to writability.

When you refactor code containing some long method and you split the method up into many small methods, you actually create a tree structure from linear code. Instead of having one line after the other, you create small methods and move the actual commands into them. After that the small methods are invoked from a higher level. Why does this make the code more readable?

First of all, because each method has a name. That is what methods have, and in Java we love camel cased talking names.

private void pureFactoryServiceImplementationIncomnigDtoInvoker(IncomingDto incomingDto){
  incomingDto.invoke();
}

But why is it any better than inlining the code and using comments?

// pure factory service implementation incoming dto invoker
incomingDto.invoke();

Probably because you have to type pureFactoryServiceImplementationIncomnigDtoInvoker twice? I know you will not type it twice. You will copy-paste it or use some IDE auto-complete feature, and for that reason the typo replacing ‘Incoming’ with ‘Incomnig’ does not really matter.

When you split up the code into small methods the names are a form of comment.

Very much like what we do in unit tests using JUnit 4.0 or later. Older versions required the test methods to start with the literal test..., but that was not a good idea, as was discovered a long time ago. (I just wonder when Go will get there.) These days Groovy (and especially Spock) lets us use whole sentences with spaces and newlines as method names in unit tests. But those method names fortunately do not have to be typed twice. They are listed and invoked by JUnit via reflection, and thus they really are what they are: documentation.

So the question still is: why is a tree structure better than a linear one?

Probably because that is how our brains work. We look at a method and see that there are two or three method calls in it. There can be a simple branch or loop structure in it, perhaps one nested into the other, but not much deeper than that. It is simple, and if the method names are selected well (I mean really in a good, meaningful and talking way), they are easy to understand, easy to read.

Then we can, using the navigational aid of the IDE, go to the methods and concentrate on the limited context of the method we are looking at. There is a rough rule:

You should be able to understand what a method does in 15 seconds.

If you stare at the method longer and you still have no idea what it does, it means it is too complex. Some people are better at apprehending the structure of code, others are challenged by that. I am in the latter group, so when I review code I often prefer smaller and simpler methods. I refuse to merge the code, or I refactor it myself, depending on my role and the actual task I perform. The juniors I work with think that I am strict and picky. The truth is I am slow. The complexity of the code should be compatible with the weakest link of the chain: anyone on the team (including imaginable future maintainers over the next 20 years, until the code is finally deleted from production) should be able to understand and maintain the code easily.

Many times, looking at the git history, I see refactoring ping-pong. For example the method

Result getFrom(SomeInput someInput){
  Result result = null;
  if( someInput != null ){
    result = someInput.get();
  }
  return result;
}

is refactored to

Result getFrom(SomeInput someInput){
  final Result result;
  if( someInput == null ){
    result = null;
  }else{
    result = someInput.get();
  }
  return result;
}

and later the other way around.

One is shorter, while the other one is more declarative. Is the repetitive refactoring back and forth a problem? Most probably it is, but not for sure. If it happens only a few times and by different people, then it is not something to worry about too much. When developers refactor the code, they feel more attached to it. More of an “it is my code” feeling, which is important. Even though a good developer is not afraid to touch and modify any code. (What could happen? Tests fail? So what? Test? What test?) Note that not all developers are good developers. But what is a good developer after all? It is relative. There are better developers and there are not so good ones. If you see only good developers who are better than you, then probably you are lucky. Or not.