Java Deep

Pure Java, what else

Best Interview Answer Ever

I am not absolutely sure that this one deserves the title, but it should run for the prize. I swear it is not made up. It actually happened during a Java technical interview where the candidate applied for a senior position.

Q: When you work in a group following agile methodology and scrum, what is the velocity of a team?

A: This is 6.1

Code is like public toilet

We, developers, work with legacy code. There are only two types of code: legacy and dead. Before your code dies it becomes legacy, unless it was born dead. Legacy code must be dealt with, and legacy code is never perfect. Was it perfect when it was created? Perhaps it was. A piece of code can erode because of changes in its environment. This environment can be other code, integration interfaces, or developer experience. The code that looked okay one day may seem disgusting the next day, or a year later. The code that one team created may just not seem okay to another team. There may be “real” issues that the industry today generally agrees are bad practice, like 5000-line god objects, or debatable things, like having many return statements in an otherwise readable and neatly written method. One school may agree to use a variable retval and have a single return statement; other schools do not care as long as other readability issues are not present.

There is one thing that all schools agree on: whenever you fix some code in a legacy application, you should not leave a bigger mess behind you than the one you found when you arrived. What is more: you are encouraged to clean the code up. If there is a bug in a method you fix, write a unit test for the fix. If there were no unit tests for the method or class, then create them. If the method is a mess, refactor it (but before that, create unit tests). If the method uses some other methods, do not be shy to modify the other methods so that their cooperation is cleaner. Do not be shy to modify the argument list or the method signature. And if you do that, you are also obligated to clean up the mess there as well.

You see the analogy implied in the title. Sometimes, when the drive is not that strong, I just turn around without hesitation and hope that the next place I find will be less abominable. I am talking about code. But sometimes I just cannot do that. After all, I am a programmer; that is my job: create good quality code from what is available. If the source is a requirement definition, we have to deal with that. If we have use case definitions or vague user stories, we go agile and do our best. If the source is a mess, we clean it up.

That is the life of a coder.

If you do it do it right

This is a philosophical or ethical command. Very general. It is something like “fail fast”. The reason it came to my mind is that I wanted to compile and release License3j using Java 8, and JavaDoc refused to compile during the release build.

This package is a simple license manager with an established user base who require that I keep up with the new versions of BouncyCastle. Being a cryptography package, it should not be outdated, and programs are encouraged to use the latest version to avoid security issues. When I executed mvn release:prepare I got many errors:

[ERROR] * <p>
[ERROR] /Users/verhasp/github/License3j/src/main/java/ error: unexpected end tag: </p>
[ERROR] * </p>
[ERROR] /Users/verhasp/github/License3j/src/main/java/ warning: no @param for args
[ERROR] public static void main(String[] args) throws Exception {
[ERROR] /Users/verhasp/github/License3j/src/main/java/ warning: no @throws for java.lang.Exception
[ERROR] public static void main(String[] args) throws Exception {
[ERROR] /Users/verhasp/github/License3j/src/main/java/com/verhas/licensor/ warning: no @param for expiryDate
[ERROR] public void setExpiry(final Date expiryDate) {
[ERROR] /Users/verhasp/github/License3j/src/main/java/com/verhas/licensor/ warning: no description for @throws
[ERROR] * @throws IOException
[ERROR] /Users/verhasp/github/License3j/src/main/java/com/verhas/licensor/ warning: no description for @throws

New JavaDoc Wants You

The errors are there because the JavaDoc of License3j is a bit sloppy. Sorry guys, I created the code many years ago, and honestly, it is not only the JavaDoc that could be improved. As a matter of fact, one of the unit tests relied on the network and the reachability of GitHub. (Not anymore though; I fixed that.)

The new Java version 8 is very strict regarding JavaDoc. As you can see on the “Enhancements in Javadoc, Java SE 8” page from Oracle:

The javadoc tool now has support for checking the content of javadoc comments for issues that could lead to various problems, such as invalid HTML or accessibility issues, in the files that are generated by javadoc. The feature is enabled by default, and can also be controlled by the new -Xdoclint option. For more details, see the output from running “javadoc -X”. This feature is also available in javac, although it is not enabled by default there.

To get the release working I had the choice to either fix the JavaDoc or to disable the strict checking via the javadoc plugin configuration in pom.xml. (The workaround comes from stackoverflow.)
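For reference, the workaround circulating on stackoverflow looks roughly like this sketch (the plugin version tag is omitted here, and newer versions of the maven-javadoc-plugin spell the option as a `<doclint>none</doclint>` element instead):

```xml
<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-javadoc-plugin</artifactId>
    <configuration>
        <!-- disable the strict Java 8 doclint checks during the release build -->
        <additionalparam>-Xdoclint:none</additionalparam>
    </configuration>
</plugin>
```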

But You Just Won’t

You can easily imagine that you will opt for the second option when you are under time pressure. You fix the issue by modifying your pom.xml or other build configuration and forget about it.

But you keep on thinking about it: why is it that way? Why is the new tool strict by default? Is it a good choice? Will it drive people to create better JavaDoc?

(Just for now I assume that the aim of the new behavior was to drive programmers to create better JavaDoc documentation and not simply to annoy us.)

I doubt that this alone will be sufficient to improve documentation. Programmers will:

  • Switch off the lint option.
  • Delete JavaDoc from the source.
  • Write some description that Java 8 will accept but is generally meaningless.

or some of them will just write correct JavaDoc: those who were writing it well anyway and will be helped by the new strictness. How many of us? 1% or 2%? The others will just see it as a whip and try to avoid it. We would need a carrot instead. Hey, bunnies! Where is the carrot?

Test JavaBeans

The first question is not how to test JavaBeans. The first question is not even if you need to test JavaBeans. The very first question is whether we need JavaBeans at all.

I am generally against JavaBeans, but when it comes to Java EE and services you can hardly avoid them. And that is the most I can tell about the first question.

The second question is whether we need to test them. JavaBeans are usually generated code, and the rule is that generated code should not be tested. Testing generated code implicitly tests the code generator, not the generated code. If there is any error, then the generator is faulty. And the generators have their own unit tests. Hopefully. I am, perhaps, still kind of a junior for having such beliefs.

So what is the conclusion: shouldn’t you test JavaBeans? You should.
Why? Because the assumption that JavaBeans are generated may be false. They are generated at first, but they are source code, and source code has a long life. Source code gets modified. By programmers. By humans. Humans make mistakes. Programmers are humans. More or less. You get it?

The usual modifications to JavaBeans are small. Minor. Trivial. Trivial code should not be tested. Careful attention, and the general lack of functionality (is setting and getting real functionality?), seem to make tests unnecessary. WROGN again, just like my spelling of wrong. Did you notice that at first? Probably not. That is also the case with errors in setters and getters. It may be a single-letter typo. No problem, integrated development environments will do the code completion and voila! The typo proliferates and becomes legacy in the whole corporation. Does it cost? Sooner or later it does.

The code is used from JSP, where the editor does not spot the mistake; BeanUtils does not find the getter or setter and needs extra code; but the names are already carved into stone and are guarded by dead souls. You try to change them, and application developers in the corporation will bang on your door claiming back their good old cozy typo-infested setter and getter.

What errors can there be? Presumably any, as far as possibility is concerned, but the most typical are:

  • The name of the setter or getter has a typo and thus does not follow the JavaBeans standard.
  • The setter alters something else, not only the field it is supposed to set.
  • The setter sets something that you cannot get back via the getter.

To test these, however, you should not write real unit test code. You should probably create some unit test class files, but they should not contain more than some declarative code. To do the real test you should use some library. A good starting article is on stackoverflow. It mentions Bean Matchers and Reflection Test Utilities. You can also give a try to JavaBeanTestRunner, which tests that the setters do not mess up fields they should not, but does not check methods like toString() or equals().
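To see the mechanics without any library, a round-trip check can be sketched with plain java.beans introspection. This is only a sketch; the Person bean and the hasWorkingAccessors helper are made up for illustration, and real projects are better off with one of the libraries mentioned above:

```java
import java.beans.Introspector;
import java.beans.PropertyDescriptor;

public class BeanCheck {
    // For every property: require both accessors, and for String properties
    // check that a value pushed through the setter comes back via the getter.
    public static boolean hasWorkingAccessors(Object bean) {
        try {
            for (PropertyDescriptor pd : Introspector
                    .getBeanInfo(bean.getClass(), Object.class)
                    .getPropertyDescriptors()) {
                if (pd.getReadMethod() == null || pd.getWriteMethod() == null) {
                    return false; // a typo broke the getter/setter pairing
                }
                if (pd.getPropertyType() == String.class) {
                    pd.getWriteMethod().invoke(bean, "sample");
                    if (!"sample".equals(pd.getReadMethod().invoke(bean))) {
                        return false; // setter and getter do not round-trip
                    }
                }
            }
            return true;
        } catch (Exception e) {
            return false;
        }
    }

    // A made-up bean used only to exercise the check
    public static class Person {
        private String name;
        public String getName() { return name; }
        public void setName(String name) { this.name = name; }
    }
}
```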

Generics Names

Generics type parameter names usually consist of one single capital letter. If you start to read the official Oracle documentation on generics, the first example is

/**
 * Generic version of the Box class.
 * @param <T> the type of the value being boxed
 */
public class Box<T> {
    // T stands for "Type"
    private T t;

    public void set(T t) { this.t = t; }
    public T get() { return t; }
}

The name of the generic type is T. A single letter, not too meaningful, and generally against other identifier naming styles. It is widely used only for generics. Strange. What is the reason for that?

Here are the arguments I have heard so far:

  • A class or method does not need many type variable names, so you will not run out of the letters of the alphabet.
    • Based on that reasoning we should also use one-character method names: there should not be too many methods in a class, so we will not run out of the alphabet there either.
  • It is not a problem that the one character does not inherently explain the type, since there is JavaDoc. You can explain what the type name actually stands for.
    • And we should also forget everything we have learned about clean code and variable naming. Code structure defines what the code does, and since that is what it really is, code structure is always up-to-date. Naming (of variables, methods, etc.) usually follows changes of code structure, since naming helps the programmer; even so, names are many times outdated, especially in the case of boolean variables, which often suggest just the opposite of the real meaning. JavaDoc is maintained and corrected some time after the code and the unit tests are finished, debugged and polished. In practice “some time after” means: never. JavaDoc gets outdated, and it is not available while reading the code as promptly as the name itself; thus it should contain information that you cannot include in code structure and good naming. Why would type names be an exception?
  • Type names of one character make them distinguishable from variable, method and class names as well as constant names.
    • It is a good point. Type names have to be distinguishable from variable, method and class names. But I see no strong argument why we should use casing different from constants. There is no place where you could use a constant and a type interchangeably, or where it would really be confusing: they are used in totally different places, in different syntactical positions. If this is such a big pain in the donkey, why do we not suffer from it in case of method and variable names? Method names are followed by () characters in Java? Not anymore as we get to Java 8 and method references!
  • But Google Code Style allows you to use multi character type names.
    • Oh yes. And it says that if you use multi-character type names, the name should have a T postfix, like RequestT, FooBarT. Should I also prefix String variables with sz and Integers with i, as in Hungarian Notation?

What then?

If you do not like the single-character naming for generics, you can name them with a _ or $ prefix. This is a suggestion you can see on stackoverflow. As for me: it is strange. Using the $ gives some “heimlich”, warm feeling, reminding me of my youth when I was programming Perl. I do not do that anymore, and for good reasons. Times changed, technology changed, I changed.

The $ is usually used by the compiler and some code generators to name generated fields and methods. Your use of $ at the Java source level may cause some difficulty for the compiler to find an appropriate name in case there is a naming collision, but the current Java compilers are fairly resilient in this respect. They just keep trying names produced by some simple algorithm until they find one that does not collide with any Java source code name, so this will not be a problem.

The underscore is really something that we used in old times instead of space. On old matrix printers the underscore character was printed so badly that you could not distinguish it from a space, and thus it was an ugly trick to have multi-word variable names. Because of this, an underscore at the start of a name is a total anti-pattern imho, practically naming two things using the same name. It is almost as if the underscore character was not there at all.

You can also use a T_ prefix, as is the convention in C++ and C# (I am not too familiar with those, so I am not sure about that). But this is just as ugly as it is without the T.

My taste is to use meaningful names with the same conventions we follow in case of constants. For example to use

    public final class EventProducer<LISTENER extends IEventListener<EVENT>,EVENT> 
           implements IEventProducer<LISTENER, EVENT> {

instead of

    public final class EventProducer<L extends IEventListener<E>,E> 
            implements IEventProducer<L,E> {

Even though that is my personal, senior, professional, expert opinion, I do not use it. Why? Because I work in an enterprise environment, in a team. The gain from using something more readable than the official default is not as high as the damage a debate and disagreement would cause. In addition, new hires have to get used to the local style, and that also costs money. Using the usable but not optimal global style is better than using a good local style. Live with it.

Can we go global?

You can try. That is the most I can say. It would have been better if the original suggestion setting the coding standard had been better than the 1960s-style one-letter approach, but this is already history. The damage has been done. And it is nothing compared to the damage caused by the brilliant idea of introducing null into OO.

We will live with the one-character generics as long as Java is alive. And since I am almost 50, that is going to be a longer period than my life span. Note that COBOL is still alive. We should expect nothing less from Java.

Using the JUnit Test Name

Name your tests

When we create JUnit tests there is usually no practical use of the name of the method. The JUnit runner uses reflection to discover the test methods, and since version 4 you are not restricted to start the name of the method with test anymore. The names of the test methods are there for documentation purposes.

There are different styles people follow. You can name your tests in the given_Something_when_Something_then_Something style, which I also followed for a while. Other schools start the name of the method with the word should, to describe what the tested object “should” do. I do not really see why this is significantly better than starting the name of the method with test. If all methods start with the same prefix, then it is only noise. These days I tend to name the methods as simple statements about what the SUT does.

How to Access the Test Name?

Technically you are free to name your methods as long as the names are unique. The name is usually not used in the test, and the outcome of the test should not depend on the actual name of the test method. Even so, there is a way supported by JUnit to access the name of the method.

If you have the JUnit rule

@Rule
public TestName name = new TestName();

you can refer to the object name in your test to get the name of the actual method:

String testName = name.getMethodName();


What can we use this for?

Sometimes the unit under test creates some huge structure that can be serialized as a binary or text file. It is a usual practice to run the test once, examine the resulting file, and if it is OK, save it for later comparison. Later test executions compare the actual result with the one that was saved and checked by the developer.

A similar scenario may be used in case of integration tests when the external systems are stubbed and their responses can be fetched from some local test data file instead of querying the external service.

In such situations the name of the test can be used to create the name of the file storing the test data. The name of the test is unique and makes it easy to pair the data with the test needing it. I used this approach in the jscglib library. This library provides a fluent API to create Java source code. The tests contain some Java builder pattern director code, and then the resulting source code is saved into a file or compared to an already stored result.

To save the file, the auxiliary method getTargetFileName was used:

	private String getTargetFileName() {
		String testName = name.getMethodName();
		String fileName = "target/resources/" + testName + ".java";
		return fileName;
	}

To get the name of the resource the method getResourceName was used:

	private String getResourceName() {
		String testName = name.getMethodName();
		return testName + ".java";
	}

After that loading and saving the generated Java code was a breeze:

	private void saveGeneratedProgram(String actual) throws IOException {
		File file = new File(getTargetFileName());
		FileOutputStream fos = new FileOutputStream(file);
		byte[] buf = actual.getBytes("utf-8");
		fos.write(buf, 0, buf.length);
		fos.close();
	}

	private String loadJavaSource() {
		try {
			String fileName = getResourceName();
			InputStream is = this.getClass().getResourceAsStream(fileName);
			byte[] buf = new byte[3000];
			int len = is.read(buf);
			is.close();
			return new String(buf, 0, len, "utf-8");
		} catch (Exception ie) {
			return null;
		}
	}

Generally, that is the only example I know of where the name of a test method is used for something other than documentation.

What you should not use the name for

There is a saying in my language: “Everybody is good at something. At least at demonstrating failure.” The following example demonstrates such a failure.

I have seen code that encoded test data into the name of the test method. Access to the test method name was also implemented in a strange way. The programmer probably did not know that there is a supported way to get the name of the method. This lack of knowledge could have stopped him or her from doing evil, but this person was a genius. The test method called a static method of a helper class. This static method threw an exception, caught it itself, and looked into the stack trace to identify the name of the caller method.

After it had access to the name, the code applied a regular expression to extract the values from the method name.
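A minimal sketch of what such a trick looks like; every name here is made up, and it is shown as a curiosity, not as a recommendation:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class TestNameHack {
    // The helper digs the calling method's name out of a stack trace
    // (the original code threw and caught an exception for the same effect).
    public static String callerMethodName() {
        StackTraceElement[] trace = new Throwable().getStackTrace();
        // trace[0] is callerMethodName itself, trace[1] is whoever called it
        return trace[1].getMethodName();
    }

    // A regular expression then extracts the "test data" from the name
    public static int ageFrom(String methodName) {
        Matcher m = Pattern.compile("(\\d+)$").matcher(methodName);
        return m.find() ? Integer.parseInt(m.group(1)) : -1;
    }

    // A "test" method that smuggles its test data into its own name
    public static int testUserAged42() {
        return ageFrom(callerMethodName());
    }
}
```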


I do not know what the intention of the developers of JUnit was when they gave us the class TestName. Probably there was some use case that needed the feature. I am certain that they did not provide the function merely because it was possible to do so. If you do not know what the API you provide is good for, you probably should not provide it just because you can. Novice programmers will use it the wrong way more often than the good way.

Also, on the other hand: if you see something in an API that can be used, it does not mean that you should use the feature. You should rather understand the aim of the feature, what it was designed for, and use it accordingly.

Writing unit tests is more important than naming them. Debate on the naming of unit tests is useless as long as there are no unit tests.

Write as many unit tests as needed, but not more.

Break Single Responsibility Principle

The Single Responsibility Principle (SRP) is not absolute. It exists to help code maintainability and readability. But from time to time you may see solutions and patterns that break the SRP and are kind of OK. This is also true for other principles, but this time I would like to talk about SRP.

Singleton breaks SRP

The oldest and simplest pattern that breaks SRP is the singleton pattern. This pattern restricts the creation of an object so that there is a single instance of a certain class. Many think that singleton actually is an antipattern, and I also tend to believe that it is better to use some container to manage the lifecycle of the objects than to hard-code singletons or other home-made factories. The anti-pattern-ness of singleton generally comes from the fact that it breaks the SRP. A singleton has two responsibilities:

  1. Manage the creation of the instance of the class
  2. Do something that is the original responsibility of the class

You can easily create a singleton that does not violate SRP by keeping the first responsibility and dropping the second one

public class Singleton {
    private static final Singleton instance = new Singleton();
    public static Singleton getInstance() { return instance; }
    private Singleton() {}
}

but there is not much use of such a beast. Singletons are simple and discussed more than enough in blogs. Let me look at something more complex that breaks SRP.

Mockito breaks SRP

Mockito is a mocking framework, which we usually use in unit tests. I assume that you are familiar with mocking and Mockito. A typical test looks like the following:

import static org.mockito.Mockito.*;

List mockedList = mock(List.class);
when(mockedList.get(0)).thenReturn("first");

mockedList.add("one");
mockedList.clear();

verify(mockedList).add("one");
verify(mockedList).clear();

(sample is taken from the Mockito page, actually mixing two examples). The mock object is created using the static call

List mockedList = mock(List.class);

and after that it is used for three different things:

  1. Set up the mock object for its mocking task.
  2. Behave as a mock, mocking the real-life object during testing.
  3. Help verification of the mock usage.

The call

when(mockedList.get(0)).thenReturn("first");

sets up the mock object. The calls

mockedList.add("one");
mockedList.clear();

use the core responsibility of the mock object, and finally the lines

verify(mockedList).add("one");
verify(mockedList).clear();

act as verification.

These are three different tasks, not one. I get the point that they are closely related to each other. You can even say that they are just three aspects of a single responsibility. One could argue that verification only uses the mock object as a parameter, and that it is not the functionality of the mock object. The fact is that the mock object keeps track of its usage and acts actively in the verification process behind the scenes. Okay, okay: these all may be true, more or less. The real question is: does it matter?

So what?

Does the readability of the code of Mockito suffer from treating the SRP this way? Does the usability of the API of Mockito suffer from this?

The answer is a definite NO to both questions. The code is as readable as it gets (imho it is more readable than many other open source projects), and it is not hurt by the fact that the mock objects have multiple responsibilities. As for the API, you can even say more: it is more readable and usable with this approach. Former mocking frameworks used strings to specify the method calls, like


(fragment from the page), which is less readable and more error-prone. A typo in the name of the method is discovered at test run time instead of compile time.

What is the moral? Don’t be dogmatic. Care about programming principles, since they are there to help you write good code. I do not urge anyone to ignore them every day. On the other hand, if you feel that one of the principles restricts you and your code would be better without it, do not hesitate to consider writing code that breaks the principle. Discuss it with your peers (programming is teamwork anyway) and come to a conclusion. The conclusion will be that you were wrong to consider breaking SRP in 90% of the cases. In 10%, however, you may come up with brilliant ideas.

Sometimes you need tuples in Java. Or not.

A tuple is an ordered list of elements. In Java that is List<Object>. Even though it exists, there is an extra demand from programmers for tuples. You can see that there is a package named javatuples that defines tuples containing 1, 2, up to 10 elements. (Btw: there is a class in the package named Unit that contains one element. WAT?) There is a long discussion on stackoverflow about tuples.

But where does it come from? Why do some Java programmers long for tuples? The answer is that tuples are part of the language constructs of other languages. They date back to such old ages that only program archeologists can remember. Languages like LISP use tuples. Python is also lurking here from the last century. Why did they implement a feature like tuples? Perhaps it seemed to be a good idea. If it were not coming from the past, Java developers would not long for it. Which itself is a hint: do you really need it? But a fact is a fact:

Java misses tuples. THIS IS A LIE!

Which is not true for two reasons:

  1. There is no need for tuples.
  2. There is a built-in type in Java that can handle tuples.

There is an interface named java.util.Map.Entry that is there just to hold two objects, and there is a simple implementation, java.util.AbstractMap.SimpleEntry. Thus Java does not miss tuples, and neither do I.
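When a method really has to return two values, the built-in pair is enough. A small sketch (divMod is a made-up example method):

```java
import java.util.AbstractMap.SimpleEntry;
import java.util.Map;

public class PairDemo {
    // Returns quotient and remainder as a pair, no tuple library needed
    public static Map.Entry<Integer, Integer> divMod(int a, int b) {
        return new SimpleEntry<>(a / b, a % b);
    }

    public static void main(String[] args) {
        Map.Entry<Integer, Integer> qr = divMod(17, 5);
        System.out.println(qr.getKey() + " remainder " + qr.getValue()); // prints "3 remainder 2"
    }
}
```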

The Magic Setter Antipattern

Setters and getters are evil. When the JavaBean definition was created it seemed to be a good idea. But they have done a lot of harm to the Java community. Not as much as the null pointer generally, but enough.

The very first thing is that many juniors believe that implementing setters and getters (hey, it is just a few clicks in Eclipse!) provides encapsulation properly. Should I detail why it does not?

The other thing is that using setters and getters is against YAGNI. YAGNI stands for “You aren’t gonna need it”. It means that you should not develop code that the project does not need now. The emphasis is on the word now. Many programmers tend to develop code that extends the actual functionality and does something more general than actually needed. Even though in principle it could be valuable, in most practical cases it is not. The code becomes more complex, and on the other hand the project never develops to the stage where the generalization the programmer created is needed.

Setters and getters are a clean, simple and very broadly used example of YAGNI. If the setter does nothing but set the value of a field, and if the getter does nothing but return the value of the field, then why do we need them at all? Why not alter the access modifier of the field to the value that the setter and the getter have (probably public)?
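To make the YAGNI point concrete, here is a sketch contrasting the two styles (the Point classes are made-up examples; the accessors do nothing beyond setting and getting):

```java
public class YagniDemo {
    // Bean style: ceremony around a field that the accessors do nothing with
    public static class PointBean {
        private int x;
        public int getX() { return x; }
        public void setX(int x) { this.x = x; }
    }

    // Direct style: the same capability without the ceremony
    public static class PointField {
        public int x;
    }
}
```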

The answer usually is that you may need to implement some more complex functionality either in the getter or in the setter, and then you need not change the “interface” the bean provides. The words “you may need to implement” suggest that this is YAGNI. What is more: it is dangerous. By implementing the setters and the getters, you implicitly expose the implementation of the class. What does a setter do? It sets the value of a field. For example setBirthDate() by definition sets the field birthDate. And this is the way your users, who write the code calling the setter, will think about it. You may document in your JavaDoc that setBirthDate() actually “specifies” a birth date, but that is too late. You named the method to be a setter and that is it. Nobody reads JavaDoc. API rulez.

Later, when you change your code and setBirthDate() does not only set the birth date, or does not even do that, the users will not be notified. The change is silent, and you have just changed the interface you implicitly provided for your users. There will be bugs, debug sessions, new releases, and this is good, because it creates workplaces (feel the irony, please). If the users had been provided direct access to the fields, moving the fields from public behind the barricades of the private access modifier would cause compile-time errors. Perhaps it is only a weird personal taste, but I prefer compile-time errors to bugs. They are easier (read: cheaper) to fix.

Do not worry: you still can modify your API. You can still remove your setters and getters from the set of methods and force fellow programmers to fix their code that implicitly assumed that setters really set and getters get. Please do.

What was the actual story that made me write this?

Once upon a time there was an object that could do something. To perform its task you could set either the field aaa or the field bbb, but never both. The application was developed this way and all was good for more than six years. One day a young programmer prince came riding on a white horse, wanting to make the world a better place. He wanted to make the aforementioned class safer, and modified the setter setAaa() to null the field bbb and the other way around. Unit tests shone. Coverage was 100%. (I should learn not to lie.) He submitted a new release of the library, and a few weeks later he finished his internship and went back to school. Around that time the applications started to use the new version of the library. And they failed miserably because of this small change and rolled back to the old version. We all had a hard time, and summing up, the corporation spent approximately one person-year of work because of the simple change, not to mention the amount of hair that programmers tore from their heads.

Why did the programs fail? There was some code that cloned an object containing the fields aaa and bbb, something like this:

    BadBean newBadBean = new BadBean();
    newBadBean.setAaa(oldBadBean.getAaa());
    newBadBean.setBbb(oldBadBean.getBbb());

You see the point. In the new bean the field aaa was always null.

Now that you have read this article you will never try to create a clever setter. I know you won’t! You know the saying: Always code as if the person who ends up maintaining your code is a violent psychopath who knows where you live. Behold!
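The failure is easy to reproduce in a few lines. A hypothetical reconstruction (the class and field names follow the story; everything else is made up):

```java
public class BadBean {
    private String aaa;
    private String bbb;

    public String getAaa() { return aaa; }
    public String getBbb() { return bbb; }

    // The "clever" setters: each one nulls the other field for safety
    public void setAaa(String aaa) {
        this.aaa = aaa;
        this.bbb = null;
    }

    public void setBbb(String bbb) {
        this.bbb = bbb;
        this.aaa = null;
    }

    // The innocent cloning code from the applications
    public static BadBean copy(BadBean old) {
        BadBean fresh = new BadBean();
        fresh.setAaa(old.getAaa());
        fresh.setBbb(old.getBbb()); // silently nulls the freshly copied aaa
        return fresh;
    }
}
```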

Quick, Cheap, Quality: choose two

Quick, Cheap, Quality

You can select two

It is old and common wisdom, even printed on the billboard of the mechanic’s shop where my car is usually repaired. And as with many well-known facts: it is ignored many times.

Although this is a wider issue, and many statements I am going to make in this article are valid for other industries, I will focus on IT, and more specifically on software development. I do that because this is where my experience and interest lie. The software industry is new compared to building construction or car repair, and the customers many times have unreal expectations. To make the situation worse, bad developers and companies exploit the ignorance of customers, cheating them. This leads to misery, and many times customers learn that software vendors are unreliable, and then they just tend not to believe what we say, even when they face an honest vendor.

There will be no liberation of the world in this article. There is no article that could do that, not even for such a minor aspect of our lives as the customer-vendor relationship in software development.

As a customer, choose two you can control

After you have realized that there is no free lunch and selected two of the above, it is still not the end of the story. You can say, for example, that you want quality software fast, no matter what it costs. If you cannot control the time or the quality, you may get only one, or none, of the three above.

Controlling money

Controlling the money is the easiest these days, as long as there is a healthy society where contracts are obeyed and enforced. You have a contract with the software vendor, and you pay no more than the contract price. I have seen software projects where the software was not ready on time and the vendor demanded more money to finish. The customer had two choices: pay the extra and get the software with some delay, or start the whole project with another vendor from scratch. Both of them meant extra cost. Extra payment: obvious. Project start-over: the investment into the vendor relationship on the technical level, and time-to-market money lost.

Looking at the story you may say the vendor simply blackmailed the customer. Real life is rarely that simple, and a single paragraph cannot tell a story as complex as life itself. I was lucky not to be involved in the whole story, and I could see that there was foul play on both sides. It is a matter of culture how we play these games.

Controlling time

Time is perhaps the second easiest on this list. At least it can easily be measured, ever since the invention of the chronograph. Controlling, however, is more than just measuring. As the example above shows, facing the fact at the end of the project that the software is not ready is a disaster.

To control the time you should use milestones and project deliverables that show the progress of the project. This may be so important that in some projects I have seen artifacts delivered that were not needed in the long run and that, from the developers' point of view, seemed to be a waste of money. We were asked, and paid of course, to develop a version of the software with an extremely simple UI that was not appropriate for the production version. Not a single line of that UI's code was used in the final version. Even so, it was able to demonstrate that the back-end of the software was partially developed; the customer could check some of the features, and there was no room for slideware lies. (We actually did not intend to lie, but even if we had wanted to, there was no room: the demo was running on a partially developed back-end.)

On the other hand, strong control of time may lead to something that can hardly be called "control" in the noble, managerial sense of the word. Tracking the progress, and requiring constant administration and deliverables solely for time tracking, may lead to unjustifiable overhead cost. Since developers are usually not knowledgeable about management, they often do not understand the importance of measuring their work, and this may lead to frustration, adversely affecting motivation and thus the work itself.

As always, there has to be a good balance, and there is no easy way to find it. As one of my junior coworkers once said when there were too many controls and checkpoints in the project: "It is controlling without con." (For those who do not get it: trolling.)

Controlling quality

This is the hardest. It is not even trivial to measure quality. There are great practices in software development that can help measure the quality of a software product under development, but they do not measure quality itself in its purest form. They measure something that may, if we are lucky, correlate with the quality of the software. We can measure the number of bugs discovered during a test phase. We can use Sonar, PMD, FindBugs, or Checkstyle on the code and follow strict coding conventions. These alone, however, do not guarantee good quality.
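To make one such proxy metric concrete, here is a minimal sketch of defect density, the number of bugs discovered per thousand lines of code (KLOC). The class and method names are my own invention, not from any real tool; the point is only that such a number may, with luck, correlate with quality, exactly as argued above.

```java
// Hypothetical sketch: defect density = bugs per thousand lines of code.
public class DefectDensity {

    // bugsFound: bugs discovered during a test phase
    // linesOfCode: size of the code base under test
    static double perKloc(int bugsFound, int linesOfCode) {
        if (linesOfCode <= 0) {
            throw new IllegalArgumentException("lines of code must be positive");
        }
        return bugsFound * 1000.0 / linesOfCode;
    }

    public static void main(String[] args) {
        // 42 bugs found in a 28,000-line module
        System.out.println(perKloc(42, 28_000)); // prints 1.5
    }
}
```

Whether 1.5 bugs per KLOC is good or bad depends entirely on the business context, which is precisely why such numbers measure something that only correlates with quality rather than quality itself.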

It is also a misconception to aim for bug-free software. There can be bugs in the code; the aim is to have software that fits the business needs. If the software is targeted toward prospective customers, a general internet audience who get distracted by a menu structure that is not intuitive enough, then the UI has to be designed accordingly, tested with a trial audience, and fine-tuned. This incurs cost. If the cost is less than the business gain: go for it.

If you work for a company and the intranet application is used by internal users who spend eight hours a day with it, they will learn to use the menu system even if it is not too intuitive. I have experienced a software development project where we wanted to make the menu structure more intuitive and the users refused the change, wanting the old, badly structured version back: they had already learned where to click and what key combinations to press.


All of these subjects deserve more discussion than a short article, let alone a single section of one, can give. They are here more as discussion icebreakers than as something to learn from like a tutorial: just some ideas and fragments that you can add to in the comments if you like.

