Java Enums: You have grace, elegance and power and this is what I Love!

While Java 8 is on its way, are you sure you know the enums that were introduced in Java 5 well? Java enums are still underestimated, and it’s a pity since they are more useful than you might think: they’re not just for your usual enumerated constants!

Java enum is polymorphic

Java enums are real classes that can have behavior and even data.
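
As a quick taste of the data side (the behavior side is demonstrated just below), here is a minimal sketch, not from the article, of enum constants each carrying their own field:

/** A minimal sketch (invented example) of an enum carrying data */
public enum CurrencySymbol {
	EUR("€"), USD("$"), GBP("£");

	private final String symbol;

	private CurrencySymbol(String symbol) {
		this.symbol = symbol;
	}

	public String symbol() {
		return symbol;
	}
}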

Let’s represent the Rock-Paper-Scissors game using an enum with a single method. Here are the unit tests to define the behavior:

@Test
public void paper_beats_rock() {
	assertThat(PAPER.beats(ROCK)).isTrue();
	assertThat(ROCK.beats(PAPER)).isFalse();
}
@Test
public void scissors_beats_paper() {
	assertThat(SCISSORS.beats(PAPER)).isTrue();
	assertThat(PAPER.beats(SCISSORS)).isFalse();
}
@Test
public void rock_beats_scissors() {
	assertThat(ROCK.beats(SCISSORS)).isTrue();
	assertThat(SCISSORS.beats(ROCK)).isFalse();
}
And here is the implementation of the enum, which primarily relies on the ordinal integer of each enum constant, such that item N+1 beats item N. This equivalence between the enum constants and integers is quite handy in many cases.
/** Enums have behavior! */
public enum Gesture {
	ROCK() {
		// Enums are polymorphic, that's really handy!
		@Override
		public boolean beats(Gesture other) {
			return other == SCISSORS;
		}
	},
	PAPER, SCISSORS;

	// we can implement with the integer representation
	public boolean beats(Gesture other) {
		return ordinal() - other.ordinal() == 1;
	}
}

Notice that there is not a single IF statement anywhere: all the business logic is handled by the integer logic and by polymorphism, where we override the method for the ROCK case. If the ordering between the items were not cyclic, we could implement it using only the natural ordering of the enum; here, polymorphism helps deal with the cycle.


You can do it without any IF statement! Yes you can!

This Java enum is also a perfect example that you can have your cake (offer a nice object-oriented API with intent-revealing names), and eat it too (implement with simple and efficient integer logic like in the good ol’ days).

Over my last projects I’ve used enums a lot as a substitute for classes: they are guaranteed to be singletons, and they come with ordering, hashcode, equals and serialization to and from text all built-in, without any clutter in the source code.

If you’re looking for Value Objects and if you can represent a part of your domain with a limited set of instances, then the enum is what you need! It’s a bit like the Sealed Case Class in Scala, except it’s totally restricted to a set of instances all defined at compile time. The bounded set of instances at compile-time is a real limitation, but now with continuous delivery, you can probably wait for the next release if you really need one extra case.

Well-suited for the Strategy pattern

Let’s move on to a system for the (in-)famous Eurovision song contest; we want to be able to configure when to notify (or not) users of any new Eurovision event. It’s important. Let’s do that with an enum:

/** The policy on how to notify the user of any Eurovision song contest event */
public enum EurovisionNotification {

	/** I love Eurovision, don't want to miss it, never! */
	ALWAYS() {
		@Override
		public boolean mustNotify(String eventCity, String userCity) {
			return true;
		}
	},

	/**
	 * I only want to know about Eurovision if it takes place in my city, so
	 * that I can take holidays elsewhere at the same time
	 */
	ONLY_IF_IN_MY_CITY() {
	// a case of flyweight pattern since we pass all the extrinsic data as
		// arguments instead of storing them as member data
		@Override
		public boolean mustNotify(String eventCity, String userCity) {
			return eventCity.equalsIgnoreCase(userCity);
		}
	},

	/** I don't care, I don't want to know */
	NEVER() {
		@Override
		public boolean mustNotify(String eventCity, String userCity) {
			return false;
		}
	};

	// no default behavior
	public abstract boolean mustNotify(String eventCity, String userCity);

}

And a unit test for the non-trivial case ONLY_IF_IN_MY_CITY:

@Test
public void notify_users_in_Baku_only() {
	assertThat(ONLY_IF_IN_MY_CITY.mustNotify("Baku", "BAKU")).isTrue();
	assertThat(ONLY_IF_IN_MY_CITY.mustNotify("Baku", "Paris")).isFalse();
}

Here we declare the method abstract and implement it separately for each case. An alternative would be to implement a default behavior and only override it for the cases where it makes sense, just like in the Rock-Paper-Scissors game.

Again, we don’t need a switch on the enum to choose the behavior; we rely on polymorphism instead. You probably don’t need the switch on enum much, except for dependency reasons. For example, when the enum is part of a message sent to the outside world, as in Data Transfer Objects (DTOs), you do not want any dependency on your internal code in the enum or its signature.
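
For illustration, here is a hedged sketch of such a translation at the boundary, where a switch maps the internal Gesture onto a DTO-level enum (the DTO type and mapper method are made up):

/** Hypothetical DTO-level enum, decoupled from the internal Gesture */
public enum GestureDto { ROCK, PAPER, SCISSORS }

public static GestureDto toDto(Gesture gesture) {
	// one of the few legitimate uses of switch on enum:
	// translating at the boundary, outside the enum itself
	switch (gesture) {
	case ROCK:
		return GestureDto.ROCK;
	case PAPER:
		return GestureDto.PAPER;
	case SCISSORS:
		return GestureDto.SCISSORS;
	default:
		throw new IllegalArgumentException(gesture.name());
	}
}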

For the Eurovision strategy, using TDD we could start with a simple boolean for the cases ALWAYS and NEVER. It would then be promoted into the enum as soon as we introduce the third strategy ONLY_IF_IN_MY_CITY. Promoting primitives is also in the spirit of the 7th rule “Wrap all primitives” from the Object Calisthenics, and an enum is the perfect way to wrap a boolean or an integer with a bounded set of possible values.

Because the strategy pattern is often controlled by configuration, the built-in serialization to and from String is also very convenient to store your settings.
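
For example, the round-trip to and from String comes for free with name() and valueOf(); a minimal sketch:

// storing the chosen strategy as a settings String, and reading it back
String setting = EurovisionNotification.ONLY_IF_IN_MY_CITY.name();
EurovisionNotification strategy = EurovisionNotification.valueOf(setting);
assertThat(strategy).isEqualTo(EurovisionNotification.ONLY_IF_IN_MY_CITY);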

Perfect match for the State pattern

Just like the Strategy pattern, the Java enum is very well-suited for finite state machines, where by definition the set of possible states is finite.

A baby as a finite state machine (picture from www.alongcamebaby.ca)

Let’s take the example of a baby simplified as a state machine, and make it an enum:

/**
 * The primary baby states (simplified)
 */
public enum BabyState {

	POOP(null), SLEEP(POOP), EAT(SLEEP), CRY(EAT);

	private final BabyState next;

	private BabyState(BabyState next) {
		this.next = next;
	}

	public BabyState next(boolean discomfort) {
		if (discomfort) {
			return CRY;
		}
		return next == null ? EAT : next;
	}
}

And of course some unit tests to drive the behavior:

@Test
public void eat_then_sleep_then_poop_and_repeat() {
	assertThat(EAT.next(NO_DISCOMFORT)).isEqualTo(SLEEP);
	assertThat(SLEEP.next(NO_DISCOMFORT)).isEqualTo(POOP);
	assertThat(POOP.next(NO_DISCOMFORT)).isEqualTo(EAT);
}

@Test
public void if_discomfort_then_cry_then_eat() {
	assertThat(SLEEP.next(DISCOMFORT)).isEqualTo(CRY);
	assertThat(CRY.next(NO_DISCOMFORT)).isEqualTo(EAT);
}

Yes, we can reference enum constants from one another, with the restriction that only constants defined earlier can be referenced. Here we have a cycle between the states EAT -> SLEEP -> POOP -> EAT etc., so we need to open the cycle at declaration time and close it again at runtime (the null check in next()).

We actually have a graph, since the CRY state can be reached from any other state.

I’ve also used enums to represent simple trees by categories, simply by referencing each node’s elements within the node itself, all with enum constants.
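
As an illustration of this idea, here is a hedged sketch (an invented example, not from the article) of a two-level tree encoded in a single enum, each category referencing its children:

/** A simple tree encoded in one enum (hypothetical example) */
public enum MenuNode {
	SODA, JUICE, CAKE, FRUIT,
	DRINKS(SODA, JUICE), DESSERTS(CAKE, FRUIT),
	MENU(DRINKS, DESSERTS);

	private final MenuNode[] children;

	private MenuNode(MenuNode... children) {
		this.children = children;
	}

	public MenuNode[] children() {
		return children.clone();
	}
}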

Enum-optimized collections

Enums also have the benefit of coming with dedicated implementations for Map and Set: EnumMap and EnumSet.

These collections have the same interfaces and behave just like your regular collections, but internally they exploit the integer nature of enums as an optimization. In short, you get old C-style data structures and idioms (bit masking and the like) hidden behind an elegant interface. This also demonstrates that you don’t have to compromise your APIs for the sake of efficiency!

To illustrate the use of these dedicated collections, let’s represent the 7 cards in Jurgen Appelo’s Delegation Poker:

public enum AuthorityLevel {

	/** make decision as the manager */
	TELL,

	/** convince people about decision */
	SELL,

	/** get input from team before decision */
	CONSULT,

	/** make decision together with team */
	AGREE,

	/** influence decision made by the team */
	ADVISE,

	/** ask feedback after decision by team */
	INQUIRE,

	/** no influence, let team work it out */
	DELEGATE;

There are 7 cards; the first 3 are more control-oriented, the middle card is balanced, and the last 3 are more delegation-oriented (I made that interpretation up, please refer to his book for explanations). In the Delegation Poker, every player selects a card for a given situation, and earns as many points as the card value (from 1 to 7), except the players in the “highest minority”.

It’s trivial to compute the number of points using the ordinal value + 1. It is also straightforward to select the control-oriented cards by their ordinal value, or we can use a Set built from a range, as we do below, to select the delegation-oriented cards:

	public int numberOfPoints() {
		return ordinal() + 1;
	}

	// It's ok to use the internal ordinal integer for the implementation
	public boolean isControlOriented() {
		return ordinal() < AGREE.ordinal();
	}

	// EnumSet is a Set implementation that benefits from the integer-like
	// nature of the enums
	public static final Set<AuthorityLevel> DELEGATION_LEVELS = EnumSet.range(ADVISE, DELEGATE);

	// enums are comparable hence the usual benefits
	public static AuthorityLevel highest(List<AuthorityLevel> levels) {
		return Collections.max(levels);
	}
}

EnumSet offers convenient static factory methods like range(from, to) to create a set that includes every enum constant between ADVISE and DELEGATE in our example, bounds included, in declaration order.

To compute the highest minority we start by finding the highest card, which is nothing but finding the max, trivial since an enum is always Comparable.
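
A quick usage sketch (the vote list is made up):

@Test
public void highest_card_among_votes() {
	List<AuthorityLevel> votes = Arrays.asList(SELL, ADVISE, INQUIRE);
	assertThat(AuthorityLevel.highest(votes)).isEqualTo(INQUIRE);
	assertThat(INQUIRE.numberOfPoints()).isEqualTo(6);
}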

Whenever we need to use this enum as a key in a Map, we should use the EnumMap, as illustrated in the test below:

// Using an EnumMap to represent the votes by authority level
@Test
public void votes_with_a_clear_majority() {
	final Map<AuthorityLevel, Integer> votes = new EnumMap<AuthorityLevel, Integer>(AuthorityLevel.class);
	votes.put(SELL, 1);
	votes.put(ADVISE, 3);
	votes.put(INQUIRE, 2);
	assertThat(votes.get(ADVISE)).isEqualTo(3);
}

Java enums are good, eat them!

I love Java enums: they’re just perfect for Value Objects in the Domain-Driven Design sense, where the set of possible values is bounded. In a recent project I deliberately managed to have a majority of value types expressed as enums. You get a lot of awesomeness for free, with almost no technical noise. This helps improve the signal-to-noise ratio between the words from the domain and the technical jargon.

Of course I make sure each enum constant is also immutable, and I get the correct equals, hashcode, toString, String or integer serialization, singleton-ness and very efficient collections on them for free, all with very little code.

(picture from sys-con.com – Jim Barnabee article)
The power of polymorphism

Enum polymorphism is very handy: I never use instanceof on enums, and I hardly ever need to switch on an enum either.

I’d love for the Java enum to be complemented by a similar construct, just like the case class in Scala, for when the set of possible values cannot be bounded. And a way to enforce the immutability of any class would be nice too. Am I asking too much?

Also <troll>don’t even try to compare the Java enum with the C# enum…</troll>

Read More

The untold art of Composite-friendly interfaces

The Composite pattern is a very powerful design pattern that you use regularly to manipulate a group of things through the very same interface as a single thing. By doing so you don’t have to discriminate between the singular and plural cases, which often simplifies your design.

Yet there are cases where you are tempted to use the Composite pattern but the interface of your objects does not quite fit. Fear not: some simple refactorings of the method signatures can make your interfaces Composite-friendly, and it’s worth it.

Always start with examples

Imagine an interface for a financial instrument with a getter on its currency:

public interface Instrument {
  Currency getCurrency();
}

This interface is alright for a single instrument; however, it does not scale to a group of instruments (Composite pattern), because the corresponding getter in the composite class would look like this (notice that the return type is now a collection):

public class CompositeInstrument {
  // list of instruments...

  public Set<Currency> getCurrencies() {...}
}

We must admit that each instrument in a composite instrument may have a different currency, hence the composite may be multi-currency, hence the collection return type. This defeats the goal of the Composite pattern, which is to unify the interfaces for single and multiple elements. If we stop there, we now have to discriminate between a single Instrument and a CompositeInstrument, and we have to do that at every call site. I’m not happy with that.

The composite pattern applied to a lamp: same plug for one or several lamps

The brutal approach

The brutal approach is to generalize the initial interface so that it works for the composite case:

public interface Instrument {
  Set<Currency> getCurrencies();
}

This interface now works for both the single case and the composite case, but at the cost of always having to deal with a collection as the return value. In fact I’m not sure we’ve simplified our design with this approach: if the composite case is not used that often, we have even complicated the design for little benefit, because the returned collection type keeps getting in our way, requiring a loop every time it is called.

The trick to improve that is to investigate what our interface is really used for. The getter on the initial interface only reveals that we did not think about the actual use beforehand; in other words, it shows a design decision made “by default”, or rather the lack of one.

Turn it into a boolean method

Very often this kind of getter is mostly used to test whether the instrument (single or composite) has something to do with a given currency, for example to check if an instrument is acceptable for a screen in USD or tradable by a trader who is only granted the right to trade in EUR.

In this case, you can revamp the method into another intention-revealing method that accepts a parameter and returns a boolean:

public interface Instrument {
  boolean isInCurrency(Currency currency);
}

This interface remains simple and is closer to our needs; in addition, it now scales for use with a Composite, because the result for a composite instrument can be derived from the result for each single instrument, combined with the AND operator:

public class CompositeInstrument {
  private final List<Instrument> instruments = new ArrayList<Instrument>();

  public boolean isInCurrency(Currency currency) {
     boolean result = true;
     // fold each instrument's result into one boolean, using AND
     for (Instrument instrument : instruments) {
        result &= instrument.isInCurrency(currency);
     }
     return result;
  }
}

Something to do with Fold

As shown above, the problem is all about the return value. Generalizing from the boolean logic of the previous example (‘&=’), the overall trick for a Composite-friendly interface is to define methods that return a type that is easy to fold over successive executions. Here the trick is to merge (“fold”) the boolean results of several calls into one single boolean result, typically with AND or OR.

If the return type is a collection, then you could perhaps merge the results using addAll(…), if that makes sense for the operation.
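
For instance, under the “brutal” interface from earlier, a composite could fold the sets returned by its children with addAll(); a minimal sketch:

public class CompositeInstrument implements Instrument {
  private final List<Instrument> instruments = new ArrayList<Instrument>();

  // fold each child's result into one Set, using addAll() as the operation
  public Set<Currency> getCurrencies() {
    final Set<Currency> currencies = new HashSet<Currency>();
    for (Instrument instrument : instruments) {
      currencies.addAll(instrument.getCurrencies());
    }
    return currencies;
  }
}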

Technically, this is easily done when the return type is closed under an operation (a magma), i.e. when the result of the operation is of the same type as its operands, just like ‘boolean1 AND boolean2‘ is also a boolean.

This is obviously the case for boolean and their boolean logic, but also for numbers and their arithmetic, collections and their sets operations, strings and their concatenation, and many other types including your own classes, as Eric Evans suggests you favour “Closure of Operations” in his book Domain-Driven Design.

Fire hydrants: from one pipe to multiple pipes (composite)

Turn it into a void method

Though not possible in our previous example, void methods work very well with the Composite pattern: with nothing to return, there is no need to unify or fold anything:

public class CompositeFunction {
  private final List<Function> functions = ...;

  public void apply(...) {
     // no result to fold: just delegate to each child function
     for (Function function : functions) {
        function.apply(...);
     }
  }
}

Continuation-passing style

The last trick to help with the Composite pattern is to adopt the continuation-passing style, by passing a continuation object as a parameter to the method. The method then writes its result into the continuation instead of using its return value.

As an example, to perform a search on every node of a tree, you may use a continuation like this:

public class SearchResults {
   private final List<Node> results = new ArrayList<Node>();

   public void addResult(Node node) {
      results.add(node); // append to the list of results
   }

   public List<Node> getResults() {
      return results;
   }
}

public class Node {
  private final List<Node> children = ...;

  public void search(SearchResults sr) {
     //...
     if (found) {
         sr.addResult(this);
     }
     // for each child, propagate the search with the same continuation
     for (Node child : children) {
        child.search(sr);
     }
  }
}

By passing a continuation as an argument to the method, the continuation takes care of the multiplicity, and the method is now well suited to the Composite pattern. You may consider that the continuation encapsulates into one object the process of folding the result of each call; of course the continuation is mutable.

This style does complicate the interface of the method a little, but it also offers the advantage of allocating a single instance of the continuation across every call.

That's continuation passing style (CC Some rights reserved by 2011 BUICK REGAL)

One word on exceptions

Methods that can throw exceptions (even unchecked exceptions) can complicate their use in a composite. To deal with exceptions within the loop that calls each child, you can just throw the first exception encountered, at the expense of giving up the loop. An alternative is to collect every caught exception into a collection, then throw a composite exception wrapping the collection once you’re done with the loop. In some other cases the composite loop may also be a convenient place to do the actual exception handling, such as full logging, in one central place.
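
Here is a hedged sketch of the collect-then-throw alternative, reusing the CompositeFunction from above and assuming a concrete parameter type Input in place of the elided arguments; CompositeException is a made-up wrapper:

public void apply(Input input) {
  final List<RuntimeException> problems = new ArrayList<RuntimeException>();
  for (Function function : functions) {
    try {
      function.apply(input);
    } catch (RuntimeException e) {
      problems.add(e); // collect and keep looping instead of giving up
    }
  }
  if (!problems.isEmpty()) {
    throw new CompositeException(problems); // hypothetical wrapper exception
  }
}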

In closing

We’ve seen some tricks to adjust the signatures of your methods so that they work well with the Composite pattern, typically by folding the return type in some way. In return, you don’t have to discriminate manually between the single and the multiple, and one single interface can be used much more often; it is with these kinds of details that you can keep your design simple and ready for any new challenge.

Follow me on Twitter! Credits: Pictures from myself, except the assembly line by BUICK REGAL (Flickr)

Read More

Thinking functional programming with Map and Fold in your everyday Java

In functional programming, Map and Fold are two extremely useful operators, and they belong to every functional language. If the Map and Fold operators are so powerful and essential, how do you explain that we can do our job using Java even though the Java programming language lacks these two operators? The truth is that you already do Map and Fold when you code in Java, except that you do them by hand each time, using loops.

Disclaimer: I’m not a reference in functional programming and this article is nothing but a gentle introduction; FP aficionados may not appreciate it much.

You’re already familiar with it

Imagine a List<Double> of VAT-excluded amounts. We want to convert this list into another corresponding list of VAT-included amounts. First we define a method to add the VAT to one single amount:

public double addVAT(double amount, double rate) {return amount * (1 + rate);}

Now let’s apply this method to each amount in the list:

public List<Double> addVAT(List<Double> amounts, double rate){
  final List<Double> amountsWithVAT = new ArrayList<Double>();
  for(double amount : amounts){
     amountsWithVAT.add(addVAT(amount, rate));
  }
  return amountsWithVAT;
}

Here we create another output list, and for each element of the input list we apply the method addVAT() to it and store the result into the output list, which has exactly the same size. Congratulations, we have just done, by hand, a Map of the method addVAT() over the input list. Let’s do it a second time.

Now we want to convert each amount into another currency using the currency rate, so we need a new method for that:

public double convertCurrency(double amount, double currencyRate){return amount / currencyRate;}

Now we can apply this method to each element in the list:

public List<Double> convertCurrency(List<Double> amounts, double currencyRate){
   final List<Double> amountsInCurrency = new ArrayList<Double>();
   for(double amount : amounts){
      amountsInCurrency.add(convertCurrency(amount, currencyRate));
   }
   return amountsInCurrency;
}

Notice how the two methods that accept a list are similar, except for the method being called at step 2:

  1. create an output list,
  2. call the given method for each element from the input list and store the result into the output list
  3. return the output list.

You do that often in Java, and that’s exactly what the Map operator is: apply a given method someMethod(T):T to each element of a List<T>, which gives you another List<T> of the same size.

Functional languages recognize that this particular need (applying a method to each element of a collection) is very common, so they encapsulate it directly into the built-in Map operator. This way, given the addVAT(double, double) method, we could directly write something like this using the Map operator:

List amountsWithVAT = map (addVAT, amounts, rate)

Yes, the first parameter is a function: functions are first-class citizens in functional languages, so they can be passed as parameters. Using the Map operator is more concise and less error-prone than the for-loop, and the intent is also much more explicit, but we don’t have it in Java…

So the point of these examples is that you are already familiar, without even knowing it, with a key concept of functional programming: the Map operator.

And now for the Fold operator

Coming back to the list of amounts, now we need to compute the total amount as the sum of all the amounts. Super-easy, let’s do that with a loop:

public double totalAmount(List<Double> amounts){
   double sum = 0;
   for(double amount : amounts){
      sum += amount;
   }
   return sum;
}

Basically we’ve just done a Fold over the list, using the operation ‘+=’ to fold each element into one accumulated result, here a number, incrementally, one at a time. This is similar to the Map operator, except that the result is not a list but a single element, a scalar.

This is again the kind of code you commonly write in Java, and now you have a name for it in functional languages: “Fold” or “Reduce”. The Fold operator is usually recursive in functional languages, and we won’t describe it here. However, we can achieve the same intent in an iterative form, using some mutable state to accumulate the result between iterations. In this approach, Fold takes a method with internal mutable state that expects one element, e.g. someMethod(T), and applies it repeatedly to each element of the input List<T>, until we end up with one single value: the result of the fold operation.

Typical functions used with Fold are summation, logical AND and OR, List.add() or List.addAll(), StringBuilder.append(), max or min, etc. The mindset with Fold is similar to that of aggregate functions in SQL.

Thinking in shapes

Thinking visually (with sloppy pictures), Map takes a list of size n and returns another list of the same size:

On the other hand, Fold takes a list of size n and returns a single element (scalar):

You may remember my previous articles on predicates, which are often used to filter collections into collections with fewer elements. In fact this filter operator is the third standard operator that complements Map and Fold in most functional languages.

Eclipse template

Since Map and Fold are quite common it makes sense to create Eclipse templates for them, e.g. for Map:
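
As a rough sketch, such a template body could look like this, using Eclipse’s ${variable} placeholders (the variable names are made up):

final List<${type}> ${result} = new ArrayList<${type}>();
for (${type} ${element} : ${input}) {
	${result}.add(${expression});
}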

Getting closer to map and fold in Java

Map and Fold are constructs that expect a function as a parameter, and in Java the only way to pass a method is to wrap it into an interface.

In Apache Commons Collections, two interfaces are particularly interesting for our needs: Transformer, with one method transform(T):T, and Closure, with one single method execute(T):void. The class CollectionUtils offers the method collect(Iterator, Transformer), which is basically a poor man’s Map operator for Java collections, and the method forAllDo() that can emulate the Fold operator using closures.
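
For instance, the Map emulation with Commons Collections 3.x could look like this sketch (pre-generics, hence the casts; the VAT rate is assumed in scope):

final double rate = 0.196;
Collection amountsWithVAT = CollectionUtils.collect(amounts, new Transformer() {
	public Object transform(Object amount) {
		return ((Double) amount) * (1 + rate);
	}
});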

With Google Guava the class Iterables offers the static method transform(Iterable, Function) which is basically the Map operator.

List<Double> exVat = Arrays.asList(new Double[] { 99., 127., 35. });
Iterable<Double> incVat = Iterables.transform(exVat, new Function<Double, Double>() {
  public Double apply(Double exVat) {
    return exVat * (1.196);
  }
});
System.out.println(incVat); // prints [118.404, 151.892, 41.86]

A similar transform() method is also available on the classes Lists for Lists and Maps for Maps.

To emulate the Fold operator in Java, you can use a Closure interface, e.g. the Closure interface in Apache Commons Collections, with one single method with only one parameter, so you must keep the current (mutable) state internally, just like ‘+=’ does.

Unfortunately there is no Fold in Guava, though it is regularly asked for, and there is not even a closure-like interface; but it is not hard to create your own. For example, you can implement the grand total above with something like this:

// the closure interface with same input/output type
public interface Closure<T> {
 T execute(T value);
}

// an example of a concrete closure
public class SummingClosure implements Closure<Double> {
 private double sum = 0;

 public Double execute(Double amount) {
   sum += amount; // apply '+=' operator
   return sum; // return current accumulated value
 }
}

// the poor man Fold operator
public static <T> T foreach(Iterable<T> list, Closure<T> closure) {
 T result = null;
 for (T t : list) {
   result = closure.execute(t);
 }
 return result;
}

@Test // example of use
public void testFold() throws Exception {
 SummingClosure closure = new SummingClosure();

 List<Double> exVat = Arrays.asList(new Double[] { 99., 127., 35. });
 Double result = foreach(exVat, closure);
 System.out.println(result); // print 261.0
}

Not only for collections: Fold over trees and other structures

The power of Map and Fold is not limited to simple collections: it scales to any navigable structure, in particular trees and graphs.

Imagine a tree using a class Node with its children. It may be a good idea to code the Depth-First and Breadth-First searches (DFS & BFS) once, as two generic methods that accept a Closure as their single parameter:

public class Node ...{
   ...
   public void dfs(Closure<Node> closure){...}
   public void bfs(Closure<Node> closure){...}
}
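
For instance, the depth-first traversal could be coded once like this sketch (reusing the home-made Closure interface from above, and assuming the children field):

public void dfs(Closure<Node> closure) {
	closure.execute(this); // visit this node first
	for (Node child : children) {
		child.dfs(closure); // then recurse, reusing the same closure
	}
}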

I have regularly used this technique in the past, and I can tell you it can cut the size of your classes big time, with only one generic method instead of many similar-looking methods that would each redo their own tree traversal. More importantly, the traversal can be unit-tested on its own using a mock closure. Each closure can also be unit-tested independently, and all of that just makes your life so much simpler.

A very similar idea can be realized with the Visitor pattern, and you are probably already familiar with it. I have seen many times, in my code and in the code of several other teams, that Visitors are well-suited to accumulating state during the traversal of a data structure. In this case the Visitor is just a special case of closure to be passed for use in the folding.

One word on Map-Reduce

You have probably heard of the Map-Reduce pattern, and yes, the words “Map” and “Reduce” in it refer to the same functional operators Map and Fold (aka Reduce) we’ve just seen. Even though the practical application is more sophisticated, it is easy to notice that Map is embarrassingly parallel, which helps a lot for parallel computation.

Read More

A touch of functional style in plain Java with predicates – Part 2

In the first part of this article we introduced predicates, which bring some of the benefits of functional programming to object-oriented languages such as Java, through a simple interface with one single method that returns true or false. In this second and last part, we’ll cover some more advanced notions to get the best out of your predicates.

Testing

One obvious case where predicates shine is testing. Whenever you need to test a method that mixes walking a data structure with some conditional logic, predicates let you test each half in isolation: first the walking of the data structure, then the conditional logic.

In a first step, you simply pass either the always-true or the always-false predicate to the method, to get rid of the conditional logic and focus only on the correct walking of the data structure:

// check with the always-true predicate
final Iterable<PurchaseOrder> all = orders.selectOrders(Predicates.<PurchaseOrder> alwaysTrue());
assertEquals(2, Iterables.size(all));

// check with the always-false predicate
assertTrue(Iterables.isEmpty(orders.selectOrders(Predicates.<PurchaseOrder> alwaysFalse())));

In a second step, you just test each possible predicate separately.

final CustomerPredicate isForCustomer1 = new CustomerPredicate(CUSTOMER_1);
assertTrue(isForCustomer1.apply(ORDER_1)); // ORDER_1 is for CUSTOMER_1
assertFalse(isForCustomer1.apply(ORDER_2)); // ORDER_2 is for CUSTOMER_2

This example is simple, but you get the idea. To test more complex logic, if testing each half of the feature is not enough, you may create mock predicates, for example a predicate that returns true once, then always false afterwards. Forcing the predicate like that may considerably simplify your test set-up, thanks to the strict separation of concerns.
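
A hedged sketch of such a mock predicate (the class name is made up):

// a mock predicate for tests: true on the first call, false ever after
public final class TrueOncePredicate<T> implements Predicate<T> {
	private boolean first = true;

	public boolean apply(T input) {
		final boolean result = first;
		first = false;
		return result;
	}
}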

Predicates work so well for testing that if you tend to do some TDD, I mean if the way you can test influences the way you design, then as soon as you know predicates they will surely find their way into your design.

Explaining to the team

In the projects I’ve worked on, the team was not familiar with predicates at first. However, this concept is easy and fun enough for everyone to get it quickly. In fact I’ve been surprised by how the idea of predicates spread naturally from the code I had written to the code of my colleagues, without much evangelism from me. I guess the benefits of predicates speak for themselves. Having mature APIs from big names like Apache or Google also helps convince people that it is serious stuff. And now with the functional programming hype, it should be even easier to sell!

Simple optimizations

This engine is so big, no optimization is required (Chicago Auto Show).

The usual optimizations are to make predicates immutable and stateless as much as possible, to enable sharing them with no concern for threading. This enables using one single instance for the whole process (as a singleton, e.g. as static final constants). The most frequently used predicates that cannot be enumerated at compile time may be cached at runtime if required. As usual, do it only if your profiler report really calls for it.

When possible a predicate object can pre-compute some of the calculations involved in its evaluation in its constructor (naturally thread-safe) or lazily.

A predicate is expected to be side-effect-free, in other words “read-only”: its execution should not cause any observable change to the system state. Some predicates must have internal state, like a counter-based predicate used for paging, but even these must not change any state in the system they apply to. With internal state they cannot be shared either; however, they may be reused within their thread if they support a reset between successive uses.
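
A minimal sketch of such a counter-based predicate (invented example), not shareable across threads but reusable after reset():

public final class PagingPredicate<T> implements Predicate<T> {
	private final int pageSize;
	private int count = 0;

	public PagingPredicate(int pageSize) {
		this.pageSize = pageSize;
	}

	public boolean apply(T input) {
		return count++ < pageSize; // internal state, but no change to the system
	}

	public void reset() {
		count = 0;
	}
}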

Fine-grained interfaces: a larger audience for your predicates

In large applications you find yourself writing very similar predicates for totally different types that nonetheless share a common property, like being related to a Customer. For example, in the administration page you may want to filter logs by customer; in the CRM page you want to filter complaints by customer.

For each such type X you’d need yet another CustomerXPredicate to filter it by customer. But since each X is related to a customer in some way, we can factor that out (Extract Interface in Eclipse) into an interface CustomerSpecific with one method:

public interface CustomerSpecific {
   Customer getCustomer();
}

This fine-grained interface reminds me of traits in some languages, except it has no reusable implementation. It could also be seen as a way to introduce a touch of dynamic typing into statically typed languages, as it enables calling indifferently any object with a getCustomer() method. Of course our class PurchaseOrder now implements this interface.

Once we have this interface CustomerSpecific, we can define predicates on it rather than on each particular type as we did before. This helps leverage just a few predicates throughout a large project. In this case, the predicate CustomerPredicate is co-located with the interface CustomerSpecific it operates on, and its generic type parameter is CustomerSpecific:

public final class CustomerPredicate implements Predicate<CustomerSpecific>, CustomerSpecific {
  private final Customer customer;
  // valued constructor omitted for clarity
  public Customer getCustomer() {
    return customer;
  }
  public boolean apply(CustomerSpecific specific) {
    return specific.getCustomer().equals(customer);
  }
}

Notice that the predicate can itself implement the interface CustomerSpecific, hence could even evaluate itself!

When using trait-like interfaces like this, you must take care of the generics and change the method that expects a Predicate<PurchaseOrder> in the class PurchaseOrders a bit, so that it also accepts any predicate on a supertype of PurchaseOrder:

public Iterable<PurchaseOrder> selectOrders(Predicate<? super PurchaseOrder> condition) {
    return Iterables.filter(orders, condition);
}

Specification in Domain-Driven Design

Eric Evans and Martin Fowler wrote the Specification pattern together, and a Specification is clearly a predicate. Actually, “predicate” is the word used in logic programming, and the Specification pattern was written to explain how we can borrow some of the power of logic programming into our object-oriented languages.

In the book Domain-Driven Design, Eric Evans details this pattern and gives several examples of Specifications, all of which express parts of the domain. Just as the book describes a Policy pattern that is nothing but the Strategy pattern applied to the domain, in some sense the Specification pattern may be considered a version of predicate dedicated to the domain aspects, with the additional intent to clearly mark and identify the business rules.

As a remark, the method name suggested in the Specification pattern is: isSatisfiedBy(T): boolean, which emphasises a focus on the domain constraints. As we’ve seen before with predicates, atoms of business logic encapsulated into Specification objects can be recombined using boolean logic (or, and, not, any, all), as in the Interpreter pattern.

The book also describes some more advanced techniques such as optimization when querying a database or a repository, and subsumption.

Optimisations when querying

The following are optimization tricks, and I’m not sure you will ever need them. But it is true that predicates are quite dumb when it comes to filtering datasets: they must be evaluated on each and every element of a set, which may cause performance problems for huge sets. If the elements are stored in a database, retrieving every one of them just to filter them one after another through the predicate does not sound like the right idea for large sets…

When you hit performance issues, you start the profiler and find the bottlenecks. Now, if calling a predicate very often to filter elements out of a data structure is a bottleneck, how do you fix that?

One way is to get rid of the whole predicate approach and go back to hard-coded, more error-prone, repetitive and less-testable code. I always resist this approach as long as I can find better alternatives to optimize the predicates, and there are many.

First, have a deeper look at how the code is actually used. In the spirit of Domain-Driven Design, looking at the domain for insights should be systematic whenever a question arises.

Very often there are clear patterns of use in a system. Though statistical, they offer great opportunities for optimisation. For example, in our PurchaseOrders class, retrieving every PENDING order may happen much more frequently than any other case, because that is what makes sense from a business perspective, in our imaginary example.

Friend Complicity

Weird complicity (Maeght foundation)

Based on the usage pattern you may code alternate implementations that are specifically optimised for it. In our example of pending orders being frequently queried, we would code an alternate implementation FastPurchaseOrder, that makes use of some pre-computed data structure to keep the pending orders ready for quick access.

Now, in order to benefit from this alternate implementation, you may be tempted to change its interface to add a dedicated method, e.g. selectPendingOrders(). Remember that before, you only had a generic selectOrders(Predicate) method. Adding the extra method may be alright in some cases, but it may raise several concerns: you must implement this extra method in every other implementation too, and the extra method may be too specific for a particular use-case, hence may not fit well on the interface.

A trick for using the internal optimization through the exact same method that only expects predicates is to make the implementation recognize the predicate it is related to. I call that “Friend Complicity”, in reference to the friend keyword in C++.

/** Optimization method: pre-computed list of pending orders */
private Iterable<PurchaseOrder> selectPendingOrders() {
  // ... optimized stuff...
}

public Iterable<PurchaseOrder> selectOrders(Predicate<? super PurchaseOrder> condition) {
  // internal complicity here: recognize friend class to enable optimization
  if (condition instanceof PendingOrderPredicate) {
     return selectPendingOrders();// faster way
  }
  // otherwise, back to the usual case
  return Iterables.filter(orders, condition);
}

It’s clear that this increases the coupling between two implementation classes that should otherwise ignore each other. Also, it only helps with performance when the “friend” predicate is given directly, with no decorator or composite around it.

What’s really important with Friend Complicity is to make sure that the behaviour of the method is never compromised: the contract of the interface must be met at all times, with or without the optimisation; it’s just that the performance improvement may happen, or not. Also keep in mind that you may want to switch back to the unoptimized implementation one day.

SQL-compromised

If the orders are actually stored in a database, then SQL can be used to query them quickly. By the way, you’ve probably noticed that the very concept of a predicate is exactly what you put after the WHERE clause in a SQL query.

Ron Arad designed a chair that encompasses another chair: this is subsumption

A first and simple way to keep using predicates yet improve performance is for some predicates to implement an additional interface SqlAware, with a method asSQL(): String that returns the exact SQL query corresponding to the evaluation of the predicate itself. When the predicate is used against a database-backed repository, the repository calls this method instead of the usual evaluate(Predicate) or apply(Predicate) method, and then queries the database with the returned query.

I call that approach SQL-compromised, as the predicate is now polluted with database-specific details that it should ignore more often than not.
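
A hedged sketch of the SqlAware approach (the isPending() accessor and the exact query are made up):

public interface SqlAware {
	String asSQL();
}

public final class PendingOrderPredicate implements Predicate<PurchaseOrder>, SqlAware {
	public boolean apply(PurchaseOrder order) {
		return order.isPending(); // assumed accessor
	}

	public String asSQL() {
		// the repository uses this instead of evaluating apply() row by row
		return "SELECT * FROM PURCHASE_ORDER WHERE STATE = 'PENDING'";
	}
}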

Alternatives to using SQL directly include the use of stored procedures or named queries: the predicate then has to provide the name of the query and all its parameters. Double-dispatch between the repository and the predicate passed to it is also an alternative: the repository calls the predicate on its additional method selectElements(this), which itself calls back the right pre-selection method findByState(state): Collection on the repository; the predicate then applies its own filtering on the returned set and returns the final filtered set.
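
A rough sketch of the double-dispatch variant (the State enum and the method names are invented):

public interface OrderPredicate extends Predicate<PurchaseOrder> {
	Iterable<PurchaseOrder> selectElements(OrderRepository repository);
}

public class OrderRepository {
	public Iterable<PurchaseOrder> selectElements(OrderPredicate predicate) {
		return predicate.selectElements(this); // first dispatch
	}

	public Collection<PurchaseOrder> findByState(State state) {...} // e.g. a named query
}

public class PendingOrderPredicate implements OrderPredicate {
	public Iterable<PurchaseOrder> selectElements(OrderRepository repository) {
		// second dispatch: pre-select in the database, then refine in memory
		return Iterables.filter(repository.findByState(State.PENDING), this);
	}

	public boolean apply(PurchaseOrder order) {
		return order.getState() == State.PENDING;
	}
}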

Subsumption

Subsumption is a logic concept to express a relation of one concept that encompasses another, such as “red, green, and yellow are subsumed under the term color” (Merriam-Webster). Subsumption between predicates can be a very powerful concept to implement in your code.

Let’s take the example of an application that broadcasts stock quotes. When registering we must declare which quotes we are interested in observing. We can do that by simply passing a predicate on stocks that only evaluates true for the stocks we’re interested in:

public final class StockPredicate implements Predicate<String> {
   private final Set<String> tickers;
   // Constructors omitted for clarity

   public boolean apply(String ticker) {
     return tickers.contains(ticker);
   }
 }

Now we assume that the application already broadcasts standard sets of popular tickers on messaging topics, and each topic has its own predicate; if the application could detect that the predicate we want to use is “included”, or subsumed, in one of the standard predicates, we could just subscribe to that topic and save computation. In our case this subsumption is rather easy to check, by just adding an additional method to our predicates:

 public boolean encompasses(StockPredicate predicate) {
   return tickers.containsAll(predicate.tickers);
 }

Subsumption is all about evaluating another predicate for “containment”. This is easy when your predicates are based on sets, as in the example, or when they are based on intervals of numbers or dates. Otherwise you may have to resort to tricks similar to Friend Complicity, i.e. recognizing the other predicate to decide whether it is subsumed, in a case-by-case fashion.

Overall, remember that subsumption is hard to implement in the general case, but even partial subsumption can be very valuable, so it is an important tool in your toolbox.

Conclusion

Predicates are fun, and can enhance both your code and the way you think about it!

Cheers,

The single source file for this part is available for download cyriux_predicates_part2.zip (fixed broken link)

Read More

A touch of functional style in plain Java with predicates – Part 1

You keep hearing that functional programming is going to take over the world, and you are still stuck with plain Java? Fear not, since you can already add a touch of functional style into your daily Java. In addition, it’s fun, saves you many lines of code and leads to fewer bugs.

What is a predicate?

I actually fell in love with predicates when I first discovered Apache Commons Collections, long ago when I was coding in Java 1.4. A predicate in this API is nothing but a Java interface with only one method:

evaluate(Object object): boolean

That’s it, it just takes some object and returns true or false. A more recent equivalent of Apache Commons Collections is Google Guava, with an Apache License 2.0. It defines a Predicate interface with one single method using a generic parameter:

apply(T input): boolean

It is that simple. To use predicates in your application you just have to implement this interface with your own logic in its single method apply(something).

A simple example

As an early example, imagine you have a list orders of PurchaseOrder objects, each with a date, a Customer and a state. The various use-cases will probably require that you find every order for a given customer, or every pending, shipped or delivered order, or every order placed within the last hour. Of course you can do that with foreach loops and an if inside, in this fashion:

//List<PurchaseOrder> orders...

public List<PurchaseOrder> listOrdersByCustomer(Customer customer) {
  final List<PurchaseOrder> selection = new ArrayList<PurchaseOrder>();
  for (PurchaseOrder order : orders) {
    if (order.getCustomer().equals(customer)) {
      selection.add(order);
    }
  }
  return selection;
}

And again for each case:

public List<PurchaseOrder> listRecentOrders(Date fromDate) {
  final List<PurchaseOrder> selection = new ArrayList<PurchaseOrder>();
  for (PurchaseOrder order : orders) {
    if (order.getDate().after(fromDate)) {
      selection.add(order);
    }
  }
  return selection;
}

The repetition is quite obvious: each method is the same except for the condition inside the if clause. The idea of using predicates is simply to replace the hard-coded condition inside the if clause with a call to a predicate, which then becomes a parameter. This means you can write only one method, taking a predicate as a parameter, and still cover all your use-cases, and even already support use-cases you do not know about yet:

public List<PurchaseOrder> listOrders(Predicate<PurchaseOrder> condition) {
  final List<PurchaseOrder> selection = new ArrayList<PurchaseOrder>();
  for (PurchaseOrder order : orders) {
    if (condition.apply(order)) {
      selection.add(order);
    }
  }
  return selection;
}

Each particular predicate can be defined as a standalone class, if used in several places, or as an anonymous class:

final Customer customer = new Customer("BruceWaineCorp");
final Predicate<PurchaseOrder> condition = new Predicate<PurchaseOrder>() {
  public boolean apply(PurchaseOrder order) {
    return order.getCustomer().equals(customer);
  }
};

Your friends who use real functional programming languages (Scala, Clojure, Haskell etc.) will comment that the code above is awfully verbose for something very common, and I have to agree. However, we are used to that verbosity in the Java syntax, and we have powerful tools (auto-completion, refactoring) to accommodate it. And our projects probably cannot switch to another syntax overnight anyway.

Predicates are collections best friends

Didn't find any related picture, so here's an unrelated picture from my library

Coming back to our example, we wrote a foreach loop only once to cover every use-case, and we were happy with that factoring out. However, your friends doing functional programming “for real” can still laugh at this loop you had to write yourself. Luckily, both APIs from Apache and Google also provide all the goodies you may expect, in particular a class similar to java.util.Collections, hence named Collections2 (not a very original name).

This class provides a method filter() that does something similar to what we had written before, so we can now rewrite our method with no loop at all:

public Collection<PurchaseOrder> selectOrders(Predicate<PurchaseOrder> condition) {
  return Collections2.filter(orders, condition);
}

In fact, this method returns a filtered view, as the documentation states:

The returned collection is a live view of unfiltered (the input collection); changes to one affect the other.

This also means that less memory is used, since there is no actual copy from the initial collection unfiltered to the actual returned collection filtered.

In a similar fashion, given an iterator, you could ask for a filtered iterator on top of it (Decorator pattern) that only yields the elements selected by your predicate:

Iterator<PurchaseOrder> filteredIterator = Iterators.filter(unfilteredIterator, condition);

Since Java 5 the Iterable interface comes in very handy for use with the foreach loop, so we’d indeed prefer to use the following form:

public Iterable<PurchaseOrder> selectOrders(Predicate<PurchaseOrder> condition) {
  return Iterables.filter(orders, condition);
}

// you can directly use it in a foreach loop, and it reads well:
for (PurchaseOrder order : orders.selectOrders(condition)) {
  //...
}

Ready-made predicates

To use predicates, you could simply define your own Predicate interface, or one for each type parameter you need in your application. That is possible; however, the good thing about using a standard Predicate interface from an API such as Guava or Commons Collections is that the API brings plenty of excellent building blocks to combine with your own predicate implementations.

First, you may not even have to implement your own predicate at all. If all you need is a condition on whether an object is equal to another, or is not null, then you can simply ask for the predicate:

// gives you a predicate that checks if an integer is zero
Predicate<Integer> isZero = Predicates.equalTo(0);
// gives a predicate that checks for non null objects
Predicate<String> isNotNull = Predicates.notNull();
// gives a predicate that checks for objects that are instanceof the given Class
Predicate<Object> isString = Predicates.instanceOf(String.class);

Given a predicate, you can negate it (true becomes false and the other way round):

Predicates.not(predicate);

Or combine several predicates using the boolean operators AND and OR:

Predicates.and(predicate1, predicate2);
Predicates.or(predicate1, predicate2);
// gives you a predicate that checks for either zero or null
Predicate<Integer> isNullOrZero = Predicates.or(isZero, Predicates.isNull());

Of course you also have the special predicates that always return true or false, which are really, really useful, as we’ll see later for testing:

Predicates.alwaysTrue();
Predicates.alwaysFalse();

Where to locate your predicates

I often used to write anonymous predicates at first; however, they always ended up being used in more places, so they were often promoted into actual classes, nested or not.

By the way, where should these predicates be located? Following Robert C. Martin and his Common Closure Principle (CCP):

Classes that change together, belong together

Because predicates manipulate objects of a certain type, I like to co-locate them close to the type they take as a parameter. For example, the classes CustomerOrderPredicate, PendingOrderPredicate and RecentOrderPredicate should reside in the same package as the class PurchaseOrder that they evaluate, or in a sub-package if you have a lot of them. Another option would be to define them nested within the type itself. Obviously, the predicates are quite coupled to the objects they operate on.

Resources

Here are the source files for the examples in this article: cyriux_predicates_part1 (zip)

In the next part, we’ll have a look at how predicates simplify testing, how they relate to Specifications in Domain-Driven Design, and some additional stuff to get the best out of your predicates.

Read More

What next language will you choose?

Now that enterprises have chosen stable platforms (JVM and .Net), on top of which we can choose a syntax out of many available, which language will you pick as your next favourite?

New languages to have a look at (my own selection)

Based on what I read every day in various blogs, I arbitrarily reduced the list of candidates to just a few languages that I believe are rising and promising:

  • Scala
  • F#
  • Clojure
  • Ruby
  • Groovy

Of course each language has its own advantages, and we should definitely take a look at several of them, not just one. But the popularity of a language also matters, to have a chance of using it in your day job.

Growth rates stats

In order to get some facts about the trends of these new programming languages in the real world, Google Trends is your friend. Here is the graph of the rates of growth (NOT absolute numbers), worldwide:

I’m impressed at how sharply Clojure is taking off… In absolute terms, Ruby comes first, Groovy second.

So much for the search queries on Google. What about job posts? Again, these are growth rates, not absolute numbers, and with a focus on the US.

Again, the growth of Clojure is impressive, even though it remains very small in absolute terms, where Ruby comes first, followed by Groovy.

Popularity index

So far the charts only showed how new languages are progressing compared to each other.

To get an indication of the actual present popularity of each language, the usual place to go is the TIOBE index (last published June 2010):

The TIOBE Programming Community index gives an indication of the popularity of programming languages. The index is updated once a month. The ratings are based on the number of skilled engineers world-wide, courses and third party vendors. The popular search engines Google, MSN, Yahoo!, Wikipedia and YouTube are used to calculate the ratings.

In short:

  • Ruby is ranked 12th; Ruby is kinda mainstream already
  • LISP/Scheme/Clojure are ranked together at 16th, almost the same rank as in 2005, when Clojure did not exist yet.
  • Scala, Groovy and F#/Caml are ranked 43rd, 44th and 45th respectively

Conclusion

Except for Ruby, the new languages Scala, Groovy, F# and Clojure are not yet well established, but they are progressing quickly, especially Clojure, then Scala.

In absolute terms, and within my selection of languages, Groovy is the most popular after Ruby, followed by Scala. Clojure and F# are still far behind.

I have a strong feeling that the time has come for developers to mix alternate syntax in addition to their legacy language (Java or C#), still on top of the same platform, something also called polyglot programming. It’s also time for the ideas of functional programming to become more popular.

In practice, these new languages are more likely to be introduced initially for automated testing and build systems, before being used for production code, especially with high productivity web frameworks that leverage their distinct strengths.

So which one to choose? If you’re already on the JVM, why not pick Groovy or Scala first, then Clojure to get a grasp of something different. If you’re on .Net, why not choose Scala and then F#. And if you were to choose Ruby, I guess you would have done it already.

Read More

The String Obsession anti-pattern

This anti-pattern is about using String instances all the time instead of more powerful objects. It is a special case of primitive obsession.

Examples

Here are some flavours of this anti-pattern:

  • String as keys in a Map: I have seen calls to toString() on objects just to use their String representation as a key in a Map: map.put(myPerson.toString(), "someValue").
  • String as enum (“In Progress”, “Done”, “Error”), with tests using equals() to trigger the corresponding logic.
  • Array or Collection of String to mimic a data structure, in which case the ordering matters: {“firstname”, “lastname”}.
  • String with a special format that makes it parsable: “value1_value2”, but used all over the place as is, with parsing code all over the place.
  • String in XML format used everywhere as is, rather than a corresponding tree of objects (a DOM); again there is a lot of parsing everywhere.

The problem

Why not use “real” Java objects instead of Strings? Value Objects would be much easier to use, and would be more efficient in many cases, whereas String obsession comes with serious troubles:

  • Repeated parsing is CPU-intensive and allocates a lot of new objects each time
  • The code to parse and interpret a String must be spread all over the code base, leading to clutter and plenty of quasi-duplication
  • When used to represent meaningful domain concepts, Strings have no methods dedicated to the business. The domain operations must be managed elsewhere and selected with conditionals; on the contrary, using value objects makes your life easier every time you use them.
  • String values also need to be parsed to be validated. Strings do not catch potential issues early: they will only be discovered when the values are really used; on the other hand, value objects can be validated and can catch issues as soon as they are constructed

Solution

Use Value Objects that are immutable, with efficient and safe equals() and hashCode(), a convenient toString(), and every useful member method that can make them easy to use. On input, convert the incoming format (e.g. String) into the corresponding full-featured object as soon as possible, and use the object as much as possible. In practice the objects can be used everywhere, every time, as long as they remain within the same VM.
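
As an illustration, here is a minimal sketch (invented example) of a value object wrapping the parsable “value1_value2” String mentioned above:

public final class CompositeKey {
	private final String category;
	private final String id;

	public CompositeKey(String category, String id) {
		this.category = category;
		this.id = id;
	}

	// parse once, at the i/o boundary, then pass the object around
	public static CompositeKey parse(String raw) {
		final String[] parts = raw.split("_");
		if (parts.length != 2) {
			throw new IllegalArgumentException("Expected value1_value2: " + raw);
		}
		return new CompositeKey(parts[0], parts[1]);
	}

	@Override
	public boolean equals(Object o) {
		if (!(o instanceof CompositeKey)) {
			return false;
		}
		final CompositeKey other = (CompositeKey) o;
		return category.equals(other.category) && id.equals(other.id);
	}

	@Override
	public int hashCode() {
		return 31 * category.hashCode() + id.hashCode();
	}

	@Override
	public String toString() {
		return category + "_" + id;
	}
}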

Why the String obsession?

Sure, there are some reasons why some people favour Strings rather than creating additional objects:

  • Strings are already available in I/O, hence you may consider it memory-efficient to just keep them
  • Strings are immutable, behave well as keys, and are easy to inspect in debug
  • You may assume that I/O represents the majority of cases in the application
  • You may consider that creating additional value objects means more code, therefore more effort and more maintenance

I strongly believe all these reasons are short-sighted, if not wrong. The burden of having to interpret the business meaning of a String each time is definitely a loss compared to creating an object with a meaning as soon as possible during I/O and using the object and its methods thereafter. Even for persistence and on-screen display, having an object with methods and accessors makes our life easier than just using Strings!

Read More

Principles for using annotations

Deciding where and how to place annotations is not an innocent decision. The last thing we want is to create extra maintenance effort because of the annotations. In other words, we want annotations that are stable, or that change for the same reasons and at the same time as the elements they annotate. This article suggests some good practices on how to design annotations.

Annotations are location-based

A special kind of wall annotation

Language annotations, or even good old xDoclet tags, make it possible to augment program elements with additional semantics, which can be used to configure tools, frameworks or containers.

Configuration is now increasingly done through annotations spread all over the project elements. The key advantage is that the location of the annotation directly references the program element (interface, class etc.), as opposed to configuration files that must reference program elements through qualified names such as “com.mycompany.somepackage.MyClass”, which are awkward, error-prone and fragile to refactoring.

For example, we can annotate an entity to declare how it must be persisted, we can annotate a class to declare how it must be instantiated by a Dependency Injection framework, and we can annotate test methods to declare their purpose.
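
For example, with JPA (just a sketch; the Customer entity is made up for the illustration):

import javax.persistence.Entity;
import javax.persistence.Id;

/** The entity itself declares how it must be persisted, right where it is defined */
@Entity
public class Customer {

	@Id
	private Long id;

	public Long getId() {
		return id;
	}
}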

If not placed thoughtfully, annotations can make your code harder to maintain. This happens when they are put in the “wrong” place, or when they introduce undesirable coupling, as we will see.

Dependencies still matter

The question of coupling between elements of the code base is also relevant for annotations. That the coupling is done via an annotation rather than plain code does not make it more acceptable.

We want to group together things that change together. As a consequence, put your annotations on the elements that change with the annotations.

In particular, when the annotation is used to declare a dependency:

Only annotate the dependent element, not the element depended on

If you use Dependency Injection and you want the class MyServiceImpl to be injected everywhere the interface MyService is used, then Guice offers the annotation @ImplementedBy:

@ImplementedBy(MyServiceImpl.class)
interface MyService { ... }

This annotation is a direct violation of the advice above, since it makes a pure abstraction (an interface) aware of an implementation, whereas the regular dependency should be the other way round: only the implementation must depend on the interface.
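
For reference, the binding that preserves the dependency direction is declared in a Guice module instead, on the configuration side (a standard sketch):

import com.google.inject.AbstractModule;

/** The binding lives here, so the MyService interface stays unaware of MyServiceImpl */
public class MyServiceModule extends AbstractModule {
	@Override
	protected void configure() {
		bind(MyService.class).to(MyServiceImpl.class);
	}
}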

I must however acknowledge that the annotation @ImplementedBy is quite convenient anyway for unit tests, to declare a default implementation for the interface. Indeed it was designed just for that, as described in the Guice documentation, along with a warning:

Use @ImplementedBy carefully; it adds a compile-time dependency from the interface to its implementation.

Favor intrinsic annotations

Another annotation on the wall in Paris

If you want to declare that a service is stateless, you cannot get it wrong: just put the annotation @Stateless on its interface. This is straightforward because being stateless is a truly intrinsic property. It also makes perfect sense to annotate a method argument with the @Nullable annotation, as the capability to expect null or not is really intrinsic to the method.

On the other hand, a service interface does not really care about how it is called. It can be called by another object (local call) or remotely, through some remote proxy. The object is not intrinsically local or remote in itself.

The point is that the decision to consume the service locally or remotely does not belong to the service, in itself, but depends on each client. It actually belongs to each use-case considered.

Said another way, specifying @Remotable or @Local directly on the service would require the developer of the service to guess how it will be used!

Intrinsic properties really belong to the element and are therefore stable, as opposed to use-case-specific properties that vary with each particular use-case. Hence, if we want stable annotations:

Only annotate an element about its intrinsic properties, and avoid annotating about use-case-specific properties.

Annotations as pointcuts

Let’s consider the example of an accounting service in a bank. Only selected categories of staff can access this service. We can use annotations to declare its security configuration:

@RolesAllowed({"auditor", "bankmanager", "admin"})

The problem with this approach is that it couples the service directly to the user roles defined elsewhere. As a consequence, if we want to change the user roles (say we now need to add the role “externalauditor”), we will have to review and change every security annotation. Conversely, if we want to change the access policy (which happens every time new senior management comes into place), we will also have to change annotations all over the code. How can we improve on that?

We can improve the situation by going back to the business analysis on the topic and separating what is intrinsic from what is not. In other words, we want to find out how a business analyst came up with the security roles for the service.

Rather than specifying the need for security in terms of allowed user roles, we can instead declare the facts: the service is “sensitive” and is about “accounting”:

@Domain(Accounting)
@Confidentiality(Sensitive)
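
Here is how these two custom annotations could be declared (a sketch: the enum names and values are made up, and each type goes in its own file; with static imports of the enum constants, the usage above compiles as written):

import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

public enum DomainArea { Accounting, Trading, Reporting }

public enum ConfidentialityLevel { Public, Internal, Sensitive }

/** Declares the business domain an element intrinsically belongs to */
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.TYPE)
public @interface Domain {
	DomainArea value();
}

/** Declares how confidential an element intrinsically is */
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.TYPE)
public @interface Confidentiality {
	ConfidentialityLevel value();
}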

And now a beautiful car annotation

Then we can define expressions that use the declared annotations (which are now stable because they are intrinsic) to select elements (here, services) and associate them with allowed user roles. These rules should be defined outside of the elements they apply to, typically in configuration files.

Thanks to the annotations that already declare half of the security knowledge, expressing the rules becomes much simpler than doing it method by method. So next time the senior management changes and decides that, from now on, “every service that is both Confidentiality(Sensitive) and Domain(Accounting) is only allowed to corporate-officer roles”, you just have to update a few rules expressed in terms of domain and confidentiality level, rather than listing many methods.

The mindset is very similar to AOP where we first define pointcuts, and then attach advices to them. Here we use annotations as an alternative way to declare the pointcuts.
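
As a sketch of that idea, the rule “Sensitive and Accounting” can be expressed as a predicate over the annotations (building on the hypothetical declarations above); the configuration file then only has to map this pointcut to the allowed roles:

/** Selects services by their intrinsic annotations, like an AOP pointcut */
public final class SecurityPointcuts {

	/** The "sensitive accounting" pointcut: both annotations must match */
	public static boolean isSensitiveAccounting(Class<?> service) {
		final Confidentiality confidentiality = service.getAnnotation(Confidentiality.class);
		final Domain domain = service.getAnnotation(Domain.class);
		return confidentiality != null
				&& confidentiality.value() == ConfidentialityLevel.Sensitive
				&& domain != null
				&& domain.value() == DomainArea.Accounting;
	}
}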

Conclusion

Annotations are very efficient for declaring properties about program elements directly on the elements. They are robust against refactoring and easier to use than long qualified names in XML files.

To get the best of annotations, we still need to consider the coupling they can introduce, in particular with respect to dependencies. If a class should not know about another, its annotations should not either.

Annotations are much more stable (less likely to change) when they only relate to intrinsic properties of the elements they are located on. When we need to configure cross-cutting concerns (security, transactions etc.), annotations can be used to declare the half of the knowledge that is really intrinsic to the elements, in the same spirit as pointcuts in AOP.

All that leads to the acknowledgement that even though annotations can be of huge value, in practice there is still a case for configuration files to complement them. In this approach, annotations on elements declare what belongs to the elements, while each use-case-specific configuration file makes use of the annotations and as a result is much simpler.

Read More

Design your value objects carefully and your life will get better!

Many concepts look obvious because they are used often, not because they are really simple.

Small quantities that we encounter all the time in most projects, such as date, time, price, quantity of items, a min and a max… are hardly a subject of interest for developers, “as it is obvious that we can represent them with primitives”.

Yes you can, but you should not.

I have experienced through different projects that every decision to use a new value object, or to improve an existing one, even very small and trivial, was probably the design decision that yielded the most benefit in every further development, maintenance or evolution. This comes from the simple fact that using something simpler “all the time” just makes our life simpler all the time.

We often don’t pay attention to the usual chair, however it was carefully designed.

As a reminder, value objects are objects with an identity solely defined by their data. Using value objects brings many typical benefits of Object Orientation, such as:

  1. One value object often gathers several primitives together, and one thing is always simpler than a few things
  2. The value object can be fully unit-tested: this gives a tremendous return, because once it is trusted there will simply be no bug about it any more
  3. A method that encapsulates even the simplest expression under a good name tells exactly what it does, instead of forcing you to interpret the expression every time (think of isEmpty() instead of “size() == 0”; it does save some time each time)
  4. The value object is a place to put documentation such as the corresponding unit and other conventions, and to have them enforced. Static creators make creation easy and self-explanatory, can enforce validations, and help select the right unit (percent, meter, currency)
  5. The method toString() can just tell what it is in a pretty-formatted fashion, and this is precious!
  6. The value object can also define various abilities, such as being Comparable, being immutable, being reused in a Flyweight fashion etc.

And of course, when needed it can be changed to evolve, without breaking the code using it. I could not imagine a team claiming to be really agile without the ability at least to evolve its most basic concepts in an easy way.
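
To make it concrete, here is a minimal sketch of such a value object; the Percentage name and every detail of it are just for illustration:

/** A percentage, stored in basis points to avoid floating-point issues */
public final class Percentage implements Comparable<Percentage> {

	private final int basisPoints;

	private Percentage(int basisPoints) {
		if (basisPoints < 0 || basisPoints > 10000) {
			throw new IllegalArgumentException("Out of range: " + basisPoints);
		}
		this.basisPoints = basisPoints;
	}

	/** Static creator that documents and enforces the expected unit */
	public static Percentage ofPercent(int percent) {
		return new Percentage(percent * 100);
	}

	public boolean isZero() {
		return basisPoints == 0;
	}

	@Override
	public int compareTo(Percentage other) {
		return Integer.compare(basisPoints, other.basisPoints);
	}

	@Override
	public boolean equals(Object o) {
		return o instanceof Percentage && ((Percentage) o).basisPoints == basisPoints;
	}

	@Override
	public int hashCode() {
		return basisPoints;
	}

	@Override
	public String toString() {
		return basisPoints / 100.0 + "%";
	}
}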

Building a value object takes some extra time at first, as opposed to jumping to a few primitives; but being lazy actually means doing less in the long run, not just right now.

Read More

Patterns as another kind of types

Patterns represent a pair (intent, solution), where the intent matters most. Based on these intents, which can be generic or specialized, I propose to consider patterns like types in strongly-typed languages, so that the compiler can enforce their constraints.

Declaring patterns: what for?

Consider the very simple Quantity pattern from Analysis Patterns (Fowler):

Represent dimensioned values with both their amount and their unit

By simply declaring this pattern, we immediately express many things:

  1. We wish to represent a quantity with its unit
  2. Quantity is a special kind of ValueObject (as in Fowler or DDD), hence:
    1. It is Immutable, therefore it has no setter, only a valued constructor
    2. Identity is based solely on its values, two value objects are equal if they share the same values
  3. In Java, the hashCode() and equals() methods must be implemented solely on the values of the immutable fields, and the hashcode can be cached internally; the toString() result can be cached as well (see the sketch below)
  4. In the spirit of Domain-Driven Design, it also probably follows Closure of Operations and Side-Effect-Free Functions
  5. It must not depend on any heavyweight dependencies such as middleware, database or gui classes
  6. It must not depend on Entities and Services

Strangely typed furniture
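
As a sketch, here is what declaring the Quantity pattern implies in plain Java; the names are illustrative, and note the hashcode computed once and cached, which immutability makes safe:

import java.math.BigDecimal;

/** A dimensioned value: an amount together with its unit */
public final class Quantity {

	private final BigDecimal amount;
	private final String unit;
	private final int hash; // cached, since the fields never change

	public Quantity(BigDecimal amount, String unit) {
		this.amount = amount;
		this.unit = unit;
		this.hash = 31 * amount.hashCode() + unit.hashCode();
	}

	@Override
	public boolean equals(Object o) {
		if (!(o instanceof Quantity)) {
			return false;
		}
		final Quantity other = (Quantity) o;
		return amount.equals(other.amount) && unit.equals(other.unit);
	}

	@Override
	public int hashCode() {
		return hash;
	}

	@Override
	public String toString() {
		return amount + " " + unit;
	}
}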

If we had an explicit way to mark the use of the pattern in the code, and if the compiler was able to know about this pattern, then it could enforce all the above. Moreover, it could also generate corresponding unit tests to assert the dynamic and runtime aspects constrained by the pattern.

This approach can be considered similar to the typing used in programming languages, where the compiler can enforce many constraints according to the type declarations given by the developer.

In practice

In current languages there is no explicit way to declare the use of patterns in the source code. The reader of the source must pay attention to various hints and compare them to his or her prior knowledge to decide whether or not a pattern is used. Typical hints include naming conventions, where the class or interface name ends with the name of the pattern participant: an interface TradeFactory is probably a Factory that creates Trade instances.

For many years I have been using a custom Javadoc tag @pattern to explicitly mark the patterns I use in my source code, hoping that a tool could exploit it in the future. Over time I have noticed several benefits of doing so, the biggest one being that I am forced to be 100% clear about what I want. Many times, having to declare explicitly in patterns what I was doing required me to think deeper: though I know what I am doing, expressing that intent with patterns from the books forces me to clarify it enough to decide which exact pattern I am applying. This is fully in the spirit of the Pragmatic Programmer’s “Programming by Coincidence”, and also in the spirit of Test-Driven Development, where the unit test forces you to clarify what you really want.
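
For example, on the TradeFactory interface mentioned above (the tag value and the wording are simply how I might write it):

/**
 * Creates Trade instances out of raw incoming messages.
 *
 * @pattern Factory
 */
public interface TradeFactory {
	// Trade is assumed to be defined elsewhere in the domain
	Trade create(String rawMessage);
}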

In other words, we could call that Pattern Driven Development. PDD, yet another acronym!

Yet another strangely typed object (the disco vacuum cleaner)

Discussion

It may not sound intuitive that there exists a documented pattern for each and every possible design decision, but I can confirm that the range of existing patterns is very wide and already covers a lot of common design decisions. Such patterns are far from sophisticated solutions or tricks, they usually just name intents and known practices. But this is exactly what we need. Dirk Riehle, in a recent paper entitled Design Pattern Density Defined, found that in the case of open-source frameworks, and considering only classic design patterns, the density of such patterns was between 40 and 70%, where density is defined as follows: “The design pattern density of an object-oriented framework is the percentage of its collaborations that are design pattern instances.”

In addition, it becomes increasingly common for software teams to use custom patterns as a convenient way of documenting their common design practices. Actually this is how the pattern story began when Kent Beck and Ralph Johnson decided to document the design of HotDraw in the now classic paper: Patterns generate architectures. To illustrate that, the Drupal team is using custom patterns in addition to standard patterns to document their design in an efficient way, as described in the program of the DrupalCon Paris 2009 conference:

This session will cover the how and why of design patterns, and review the most common patterns used in Drupal as well as a number of common patterns we could be using. Some are Drupal-specific patterns and some are more general software design patterns.

Another major example of patterns dedicated to a particular project is the Eclipse IDE. Erich Gamma, the main Eclipse designer and one of the Gang of Four, wrote the book Contributing to Eclipse: principles, patterns, and plug-ins (together with Kent Beck), where Eclipse-specific and standard patterns are used together as the primary means of documenting the design.

Conclusion

Patterns are signs that denote intents, and I propose to promote the use of patterns (existing ones, or patterns defined in the context of a particular project) to trace these intents explicitly, and to enable tools to enforce pattern-specific constraints.

Pictures taken at la Biennale Internationale Design, Saint Etienne 2008

Read More