A touch of functional style in plain Java with predicates – Part 2

In the first part of this article we introduced predicates, which bring some of the benefits of functional programming to object-oriented languages such as Java, through a simple interface with one single method that returns true or false. In this second and last part, we’ll cover some more advanced notions to get the best out of your predicates.


One obvious case where predicates shine is testing. Whenever you need to test a method that mixes walking a data structure with some conditional logic, predicates let you test each half in isolation: first the walking of the data structure, then the conditional logic.

As a first step, you simply pass either the always-true or always-false predicate to the method, to get rid of the conditional logic and focus only on the correct walking of the data structure:

// check with the always-true predicate
final Iterable<PurchaseOrder> all = orders.selectOrders(Predicates.<PurchaseOrder> alwaysTrue());
assertEquals(2, Iterables.size(all));

// check with the always-false predicate
assertTrue(Iterables.isEmpty(orders.selectOrders(Predicates.<PurchaseOrder> alwaysFalse())));

As a second step, you simply test each possible predicate separately.

final CustomerPredicate isForCustomer1 = new CustomerPredicate(CUSTOMER_1);
assertTrue(isForCustomer1.apply(ORDER_1)); // ORDER_1 is for CUSTOMER_1
assertFalse(isForCustomer1.apply(ORDER_2)); // ORDER_2 is for CUSTOMER_2

This example is simple but you get the idea. To test more complex logic, if testing each half of the feature is not enough you may create mock predicates, for example a predicate that returns true once, then always false later on. Forcing the predicate like that may considerably simplify your test set-up, thanks to the strict separation of concerns.
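As an illustration, here is a sketch of such a single-use mock predicate (the names are invented, and a minimal stand-in for the Predicate interface is declared so the snippet is self-contained; the real interface comes from Guava or Commons Collections):

```java
// Minimal stand-in for the Predicate interface, declared only to keep
// this sketch self-contained (the real one comes from Guava).
interface Predicate<T> {
  boolean apply(T input);
}

// Hypothetical mock predicate for tests: returns true on the first call,
// then always false afterwards.
class TrueOncePredicate<T> implements Predicate<T> {
  private boolean used = false;

  public boolean apply(T input) {
    if (used) {
      return false;
    }
    used = true;
    return true;
  }
}

public class MockPredicateDemo {
  public static void main(String[] args) {
    Predicate<String> mock = new TrueOncePredicate<String>();
    System.out.println(mock.apply("first"));  // true
    System.out.println(mock.apply("second")); // false
    System.out.println(mock.apply("third"));  // false
  }
}
```

Injecting such a forced predicate lets the test drive the method through one precise path without building elaborate test data.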

Predicates work so well for testing that if you tend to do some TDD, that is, if the way you can test influences the way you design, then as soon as you know predicates they will surely find their way into your design.

Explaining to the team

In the projects I’ve worked on, the team was not familiar with predicates at first. However, this concept is easy and fun enough for everyone to get it quickly. In fact I’ve been surprised by how the idea of predicates spread naturally from the code I had written to the code of my colleagues, without much evangelism from me. I guess the benefits of predicates speak for themselves. Having mature APIs from big names like Apache or Google also helps convince people that this is serious stuff. And now with the functional programming hype, it should be even easier to sell!

Simple optimizations

This engine is so big, no optimization is required (Chicago Auto Show).

The usual optimizations are to make predicates immutable and stateless as much as possible, so that they can be shared without any threading concerns. This enables using one single instance for the whole process (as a singleton, e.g. as a static final constant). The most frequently used predicates that cannot be enumerated at compile time may be cached at runtime if required. As usual, do it only if your profiler report really calls for it.

When possible, a predicate object can pre-compute some of the calculations involved in its evaluation, either in its constructor (naturally thread-safe) or lazily.

A predicate is expected to be side-effect-free, in other words “read-only”: its execution should not cause any observable change to the system state. Some predicates must have internal state, like a counter-based predicate used for paging, but even these must not change any state in the system they apply to. Predicates with internal state cannot be shared; however, they may be reused within their thread if they support a reset between successive uses.
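For example, a hypothetical counter-based paging predicate could look like this (all names are invented, and a minimal stand-in for the Predicate interface keeps the sketch self-contained); note the internal state and the reset:

```java
// Minimal stand-in for the Predicate interface, declared only to keep
// this sketch self-contained.
interface Predicate<T> {
  boolean apply(T input);
}

// Hypothetical paging predicate: selects only the elements that fall on one
// page. It never modifies the elements it sees (side-effect-free with respect
// to the system), but it has internal state, so it must not be shared across
// threads; reset() allows reuse within the same thread.
class PagePredicate<T> implements Predicate<T> {
  private final int pageSize;
  private final int pageIndex;
  private int seen = 0;

  PagePredicate(int pageSize, int pageIndex) {
    this.pageSize = pageSize;
    this.pageIndex = pageIndex;
  }

  public boolean apply(T input) {
    int index = seen++;
    return index >= pageIndex * pageSize && index < (pageIndex + 1) * pageSize;
  }

  public void reset() {
    seen = 0;
  }
}

public class PagePredicateDemo {
  public static void main(String[] args) {
    PagePredicate<String> secondPage = new PagePredicate<String>(2, 1);
    for (String item : new String[] { "a", "b", "c", "d", "e" }) {
      if (secondPage.apply(item)) {
        System.out.println(item); // prints c then d
      }
    }
    secondPage.reset(); // ready for reuse within the same thread
  }
}
```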

Fine-grained interfaces: a larger audience for your predicates

In large applications you find yourself writing very similar predicates for totally different types that share a common property, like being related to a Customer. For example, in the administration page you may want to filter logs by customer; in the CRM page you want to filter complaints by customer.

For each such type X you’d need yet another CustomerXPredicate to filter it by customer. But since each X is related to a customer in some way, we can factor that out (Extract Interface in Eclipse) into an interface CustomerSpecific with one method:

public interface CustomerSpecific {
   Customer getCustomer();
}

This fine-grained interface reminds me of traits in some languages, except it has no reusable implementation. It could also be seen as a way to introduce a touch of dynamic typing within statically typed languages, as it enables calling indifferently any object with a getCustomer() method. Of course our class PurchaseOrder now implements this interface.

Once we have this interface CustomerSpecific, we can define predicates on it rather than on each particular type as we did before. This helps leverage just a few predicates throughout a large project. In this case, the predicate CustomerPredicate is co-located with the interface CustomerSpecific it operates on, and it has a generic type CustomerSpecific:

public final class CustomerPredicate implements Predicate<CustomerSpecific>, CustomerSpecific {
  private final Customer customer;
  // valued constructor omitted for clarity
  public Customer getCustomer() {
    return customer;
  }
  public boolean apply(CustomerSpecific specific) {
    return specific.getCustomer().equals(customer);
  }
}

Notice that the predicate can itself implement the interface CustomerSpecific, hence could even evaluate itself!

When using trait-like interfaces like this, you must take care of the generics and slightly change the method that expects a Predicate<PurchaseOrder> in the class PurchaseOrders, so that it also accepts any predicate on a supertype of PurchaseOrder:

public Iterable<PurchaseOrder> selectOrders(Predicate<? super PurchaseOrder> condition) {
    return Iterables.filter(orders, condition);
}

Specification in Domain-Driven Design

Eric Evans and Martin Fowler wrote together the pattern Specification, which is clearly a predicate. Actually the word “predicate” is the word used in logic programming, and the pattern Specification was written to explain how we can borrow some of the power of logic programming into our object-oriented languages.

In the book Domain-Driven Design, Eric Evans details this pattern and gives several examples of Specifications which all express parts of the domain. Just like this book describes a Policy pattern that is nothing but the Strategy pattern when applied to the domain, in some sense the Specification pattern may be considered a version of predicate dedicated to the domain aspects, with the additional intent to clearly mark and identify the business rules.

As a remark, the method name suggested in the Specification pattern is: isSatisfiedBy(T): boolean, which emphasises a focus on the domain constraints. As we’ve seen before with predicates, atoms of business logic encapsulated into Specification objects can be recombined using boolean logic (or, and, not, any, all), as in the Interpreter pattern.
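As a sketch of that recombination (the Specification interface below follows the pattern's isSatisfiedBy naming, but the composite class names are illustrative, not taken from the book's code):

```java
// Specification interface with the isSatisfiedBy naming from the pattern.
interface Specification<T> {
  boolean isSatisfiedBy(T candidate);
}

// Composite specification: logical AND of two specifications (Interpreter style).
class AndSpecification<T> implements Specification<T> {
  private final Specification<T> left;
  private final Specification<T> right;

  AndSpecification(Specification<T> left, Specification<T> right) {
    this.left = left;
    this.right = right;
  }

  public boolean isSatisfiedBy(T candidate) {
    return left.isSatisfiedBy(candidate) && right.isSatisfiedBy(candidate);
  }
}

// Composite specification: logical NOT of a specification.
class NotSpecification<T> implements Specification<T> {
  private final Specification<T> spec;

  NotSpecification(Specification<T> spec) {
    this.spec = spec;
  }

  public boolean isSatisfiedBy(T candidate) {
    return !spec.isSatisfiedBy(candidate);
  }
}

public class SpecificationDemo {
  public static void main(String[] args) {
    Specification<Integer> positive = new Specification<Integer>() {
      public boolean isSatisfiedBy(Integer candidate) { return candidate.intValue() > 0; }
    };
    Specification<Integer> even = new Specification<Integer>() {
      public boolean isSatisfiedBy(Integer candidate) { return candidate.intValue() % 2 == 0; }
    };
    System.out.println(new AndSpecification<Integer>(positive, even).isSatisfiedBy(4));  // true
    System.out.println(new AndSpecification<Integer>(positive, even).isSatisfiedBy(-4)); // false
    System.out.println(new NotSpecification<Integer>(even).isSatisfiedBy(3));            // true
  }
}
```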

The book also describes some more advanced techniques such as optimization when querying a database or a repository, and subsumption.

Optimisations when querying

The following are optimization tricks, and I’m not sure you will ever need them. But it is true that predicates are quite dumb when it comes to filtering datasets: they must be evaluated on each and every element of a set, which may cause performance problems for huge sets. If the elements are stored in a database, retrieving every element just to filter them one by one through the predicate does not sound like a good idea for large sets…

When you hit performance issues, you start the profiler and find the bottlenecks. Now if calling a predicate very often to filter elements out of a data structure is a bottleneck, then how do you fix that?

One way is to get rid of the full predicate thing, and to go back to hard-coded, more error-prone, repetitive and less-testable code. I always resist this approach as long as I can find better alternatives to optimize the predicates, and there are many.

First, have a deeper look at how the code is being used. In the spirit of Domain-Driven Design, looking at the domain for insights should be systematic whenever a question occurs.

Very often there are clear patterns of use in a system. Though statistical, they offer great opportunities for optimisation. For example in our PurchaseOrders class, retrieving every PENDING order may be used much more frequently than every other case, because that’s how it makes sense from a business perspective, in our imaginary example.

Friend Complicity

Weird complicity (Maeght foundation)

Based on the usage pattern you may code alternate implementations that are specifically optimised for it. In our example of pending orders being frequently queried, we would code an alternate implementation FastPurchaseOrder, that makes use of some pre-computed data structure to keep the pending orders ready for quick access.

Now, in order to benefit from this alternate implementation, you may be tempted to add a dedicated method to the interface, e.g. selectPendingOrders(). Remember that before, you only had a generic selectOrders(Predicate) method. Adding the extra method may be alright in some cases, but it raises several concerns: you must implement it in every other implementation too, and it may be too specific for a particular use-case, hence may not fit well in the interface.

A trick for using the internal optimization through the exact same method that only expects predicates is to make the implementation recognize the predicate it is related to. I call that “Friend Complicity”, in reference to the friend keyword in C++.

/** Optimization method: pre-computed list of pending orders */
private Iterable<PurchaseOrder> selectPendingOrders() {
  // ... optimized stuff...
}

public Iterable<PurchaseOrder> selectOrders(Predicate<? super PurchaseOrder> condition) {
  // internal complicity here: recognize friend class to enable optimization
  if (condition instanceof PendingOrderPredicate) {
     return selectPendingOrders(); // faster way
  }
  // otherwise, back to the usual case
  return Iterables.filter(orders, condition);
}

It’s clear that this approach increases the coupling between two implementation classes that should otherwise ignore each other. Also, it only helps performance if the “friend” predicate is given directly, with no decorator or composite around it.

What’s really important with Friend Complicity is to make sure that the behaviour of the method is never compromised: the contract of the interface must be met at all times, with or without the optimisation; it’s just that the performance improvement may happen, or not. Also keep in mind that you may want to switch back to the unoptimized implementation one day.


If the orders are actually stored in a database, then SQL can be used to query them quickly. By the way, you’ve probably noticed that the very concept of a predicate is exactly what you put in the WHERE clause of a SQL query.

Ron Arad designed a chair that encompasses another chair: this is subsumption

A first and simple way to keep using predicates yet improve performance is for some predicates to implement an additional interface SqlAware, with a method asSQL(): String that returns the exact SQL query corresponding to the evaluation of the predicate itself. When the predicate is used against a database-backed repository, the repository calls this method instead of the usual evaluate(Predicate) or apply(Predicate) method, and queries the database with the returned query.
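A minimal sketch of that idea (the SqlAware interface comes from the text, but the PendingOrderPredicate class, the SQL string and the Predicate stand-in below are invented for illustration):

```java
// Minimal stand-in for the Predicate interface, declared only to keep
// this sketch self-contained.
interface Predicate<T> {
  boolean apply(T input);
}

// SqlAware interface from the text: a predicate that can express itself as SQL.
interface SqlAware {
  String asSQL();
}

// Hypothetical predicate on order states that is also SQL-aware.
class PendingOrderPredicate implements Predicate<String>, SqlAware {
  public boolean apply(String state) {
    return "PENDING".equals(state);
  }

  public String asSQL() {
    return "SELECT * FROM PURCHASE_ORDER WHERE STATE = 'PENDING'";
  }
}

public class SqlAwareDemo {
  public static void main(String[] args) {
    PendingOrderPredicate pending = new PendingOrderPredicate();
    // in memory, the predicate evaluates as usual...
    System.out.println(pending.apply("PENDING")); // true
    // ...but a repository that detects SqlAware can delegate to the database
    System.out.println(pending.asSQL());
  }
}
```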

I call that approach SQL-compromised as the predicate is now polluted with database-specific details it should ignore more often than not.

Alternatives to using SQL directly include stored procedures or named queries: the predicate then has to provide the name of the query and all its parameters. Double-dispatch between the repository and the predicate passed to it is another alternative: the repository calls the predicate’s additional method selectElements(this), which itself calls back the right pre-selection method findByState(state): Collection on the repository; the predicate then applies its own filtering on the returned set and returns the final filtered set.
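Here is a sketch of this double-dispatch variant; only the method names selectElements and findByState come from the text, while the in-memory repository and the Order and predicate classes are invented for illustration:

```java
import java.util.ArrayList;
import java.util.Collection;

enum State { PENDING, SHIPPED }

class Order {
  final State state;
  final int amount;
  Order(State state, int amount) { this.state = state; this.amount = amount; }
}

// Hypothetical predicate that knows which coarse pre-selection it needs
// (findByState) and applies its own finer filtering afterwards.
class BigPendingOrderPredicate {
  private final int minAmount;
  BigPendingOrderPredicate(int minAmount) { this.minAmount = minAmount; }

  Collection<Order> selectElements(OrderRepository repository) {
    // call back the repository for the coarse, optimizable pre-selection
    Collection<Order> preSelected = repository.findByState(State.PENDING);
    Collection<Order> result = new ArrayList<Order>();
    for (Order order : preSelected) {
      if (order.amount >= minAmount) {
        result.add(order);
      }
    }
    return result;
  }
}

class OrderRepository {
  private final Collection<Order> orders = new ArrayList<Order>();
  void add(Order order) { orders.add(order); }

  // Coarse pre-selection (could become a SQL query in a real repository)
  Collection<Order> findByState(State state) {
    Collection<Order> result = new ArrayList<Order>();
    for (Order order : orders) {
      if (order.state == state) {
        result.add(order);
      }
    }
    return result;
  }

  // Double-dispatch entry point: the repository hands itself to the predicate
  Collection<Order> select(BigPendingOrderPredicate predicate) {
    return predicate.selectElements(this);
  }
}

public class DoubleDispatchDemo {
  public static void main(String[] args) {
    OrderRepository repository = new OrderRepository();
    repository.add(new Order(State.PENDING, 50));
    repository.add(new Order(State.PENDING, 500));
    repository.add(new Order(State.SHIPPED, 800));
    // only the pending order with amount >= 100 survives both filters
    System.out.println(repository.select(new BigPendingOrderPredicate(100)).size()); // 1
  }
}
```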


Subsumption

Subsumption is a logic concept expressing a relation of one concept that encompasses another, such as “red, green, and yellow are subsumed under the term color” (Merriam-Webster). Subsumption between predicates can be a very powerful concept to implement in your code.

Let’s take the example of an application that broadcasts stock quotes. When registering we must declare which quotes we are interested in observing. We can do that by simply passing a predicate on stocks that only evaluates true for the stocks we’re interested in:

public final class StockPredicate implements Predicate<String> {
   private final Set<String> tickers;
   // Constructors omitted for clarity

   public boolean apply(String ticker) {
     return tickers.contains(ticker);
   }
}

Now assume that the application already broadcasts standard sets of popular tickers on messaging topics, each topic having its own predicate. If we could detect that the predicate we want to use is “included”, or subsumed, in one of the standard predicates, we could just subscribe to the corresponding topic and save computation. In our case this subsumption is rather easy to check, by just adding an additional method to our predicates:

 public boolean encompasses(StockPredicate predicate) {
   return tickers.containsAll(predicate.tickers);
 }

Subsumption is all about evaluating another predicate for “containment”. This is easy when your predicates are based on sets, as in the example, or when they are based on intervals of numbers or dates. Otherwise you may have to resort to tricks similar to Friend Complicity, i.e. recognizing the other predicate to decide whether it is subsumed, in a case-by-case fashion.

Overall, remember that subsumption is hard to implement in the general case, but even partial subsumption can be very valuable, so it is an important tool in your toolbox.
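For interval-based predicates, subsumption reduces to interval containment. Here is a sketch with invented names:

```java
// Hypothetical interval-based predicate on prices. One such predicate
// encompasses (subsumes) another if its interval contains the other's interval.
class PriceRangePredicate {
  private final double min;
  private final double max;

  PriceRangePredicate(double min, double max) {
    this.min = min;
    this.max = max;
  }

  boolean apply(double price) {
    return price >= min && price <= max;
  }

  // Subsumption check: every price the other predicate accepts,
  // this predicate accepts too.
  boolean encompasses(PriceRangePredicate other) {
    return min <= other.min && other.max <= max;
  }
}

public class SubsumptionDemo {
  public static void main(String[] args) {
    PriceRangePredicate wide = new PriceRangePredicate(0.0, 100.0);
    PriceRangePredicate narrow = new PriceRangePredicate(10.0, 50.0);
    System.out.println(wide.encompasses(narrow));  // true
    System.out.println(narrow.encompasses(wide));  // false
  }
}
```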


Predicates are fun, and can enhance both your code and the way you think about it!


The single source file for this part is available for download cyriux_predicates_part2.zip (fixed broken link)

Read More

A touch of functional style in plain Java with predicates – Part 1

You keep hearing about functional programming that is going to take over the world, and you are still stuck with plain Java? Fear not, since you can already add a touch of functional style into your daily Java. In addition, it’s fun, saves you many lines of code and leads to fewer bugs.

What is a predicate?

I actually fell in love with predicates when I first discovered Apache Commons Collections, long ago when I was coding in Java 1.4. A predicate in this API is nothing but a Java interface with only one method:

evaluate(Object object): boolean

That’s it, it just takes some object and returns true or false. A more recent equivalent of Apache Commons Collections is Google Guava, with an Apache License 2.0. It defines a Predicate interface with one single method using a generic parameter:

apply(T input): boolean

It is that simple. To use predicates in your application you just have to implement this interface with your own logic in its single method apply(something).

A simple example

As an early example, imagine you have a list orders of PurchaseOrder objects, each with a date, a Customer and a state. The various use-cases will probably require that you find every order for a given customer, or every pending, shipped or delivered order, or every order placed within the last hour. Of course you can do that with foreach loops and an if inside, in this fashion:

//List<PurchaseOrder> orders...

public List<PurchaseOrder> listOrdersByCustomer(Customer customer) {
  final List<PurchaseOrder> selection = new ArrayList<PurchaseOrder>();
  for (PurchaseOrder order : orders) {
    if (order.getCustomer().equals(customer)) {
      selection.add(order);
    }
  }
  return selection;
}

And again for each case:

public List<PurchaseOrder> listRecentOrders(Date fromDate) {
  final List<PurchaseOrder> selection = new ArrayList<PurchaseOrder>();
  for (PurchaseOrder order : orders) {
    if (order.getDate().after(fromDate)) {
      selection.add(order);
    }
  }
  return selection;
}

The repetition is quite obvious: each method is the same except for the condition inside the if clause. The idea of using predicates is simply to replace this hard-coded condition by a call to a predicate, which then becomes a parameter. This means you can write only one method, taking a predicate as a parameter, and still cover all your use-cases, even supporting use-cases you do not know yet:

public List<PurchaseOrder> listOrders(Predicate<PurchaseOrder> condition) {
  final List<PurchaseOrder> selection = new ArrayList<PurchaseOrder>();
  for (PurchaseOrder order : orders) {
    if (condition.apply(order)) {
      selection.add(order);
    }
  }
  return selection;
}

Each particular predicate can be defined as a standalone class, if used at several places, or as an anonymous class:

final Customer customer = new Customer("BruceWaineCorp");
final Predicate<PurchaseOrder> condition = new Predicate<PurchaseOrder>() {
  public boolean apply(PurchaseOrder order) {
    return order.getCustomer().equals(customer);
  }
};

Your friends who use real functional programming languages (Scala, Clojure, Haskell, etc.) will comment that the code above is awfully verbose for something very common, and I have to agree. However, we are used to that verbosity in the Java syntax and we have powerful tools (auto-completion, refactoring) to accommodate it. And our projects probably cannot switch to another syntax overnight anyway.

Predicates are collections best friends

Didn't find any related picture, so here's an unrelated picture from my library

Coming back to our example, we wrote a foreach loop only once to cover every use-case, and we were happy with that factoring out. However, your friends doing functional programming “for real” can still laugh at this loop you had to write yourself. Luckily, both APIs, from Apache and from Google, also provide all the goodies you may expect, in particular a class similar to java.util.Collections, hence named Collections2 (not a very original name).

This class provides a method filter() that does something similar to what we had written before, so we can now rewrite our method with no loop at all:

public Collection<PurchaseOrder> selectOrders(Predicate<PurchaseOrder> condition) {
  return Collections2.filter(orders, condition);
}

In fact, this method returns a filtered view:

The returned collection is a live view of unfiltered (the input collection); changes to one affect the other.

This also means that less memory is used, since there is no actual copy from the initial collection unfiltered to the actual returned collection filtered.

In a similar approach, given an iterator, you could ask for a filtered iterator on top of it (Decorator pattern) that only yields the elements selected by your predicate:

Iterator<PurchaseOrder> filteredIterator = Iterators.filter(unfilteredIterator, condition);

Since Java 5 the Iterable interface comes in very handy for use with the foreach loop, so we’d indeed prefer the following expression:

public Iterable<PurchaseOrder> selectOrders(Predicate<PurchaseOrder> condition) {
  return Iterables.filter(orders, condition);
}

// you can directly use it in a foreach loop, and it reads well:
for (PurchaseOrder order : orders.selectOrders(condition)) {
  // process the order
}

Ready-made predicates

To use predicates, you could simply define your own Predicate interface, or one for each type parameter you need in your application. This is possible; however, the good thing about using a standard Predicate interface from an API such as Guava or Commons Collections is that the API brings plenty of excellent building blocks to combine with your own predicate implementations.

First, you may not even have to implement your own predicate at all. If all you need is a condition on whether an object equals another, or is not null, then you can simply ask for the predicate:

// gives you a predicate that checks if an integer is zero
Predicate<Integer> isZero = Predicates.equalTo(0);
// gives a predicate that checks for non null objects
Predicate<String> isNotNull = Predicates.notNull();
// gives a predicate that checks for objects that are instanceof the given Class
Predicate<Object> isString = Predicates.instanceOf(String.class);

Given a predicate, you can inverse it (true becomes false and the other way round):
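In Guava this is Predicates.not(aPredicate). As a self-contained sketch of what such an inversion does (with a minimal stand-in for the Predicate interface, since the real one lives in Guava), it is just a decorator:

```java
// Minimal stand-in for the Predicate interface, declared only to keep
// this sketch self-contained.
interface Predicate<T> {
  boolean apply(T input);
}

// Equivalent of Guava's Predicates.not(...): a decorator that inverses
// another predicate (true becomes false and the other way round).
class NotPredicate<T> implements Predicate<T> {
  private final Predicate<T> delegate;
  NotPredicate(Predicate<T> delegate) { this.delegate = delegate; }
  public boolean apply(T input) { return !delegate.apply(input); }
}

public class NotPredicateDemo {
  public static void main(String[] args) {
    Predicate<Integer> isZero = new Predicate<Integer>() {
      public boolean apply(Integer input) { return input.intValue() == 0; }
    };
    Predicate<Integer> isNonZero = new NotPredicate<Integer>(isZero);
    System.out.println(isNonZero.apply(0)); // false
    System.out.println(isNonZero.apply(7)); // true
  }
}
```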


Combine several predicates using boolean operators AND, OR:

Predicates.and(predicate1, predicate2);
Predicates.or(predicate1, predicate2);
// gives you a predicate that checks for either zero or null
Predicate<Integer> isNullOrZero = Predicates.or(isZero, Predicates.isNull());

Of course you also have the special predicates that always return true or false, which are really, really useful, as we’ll see later for testing:
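In Guava these are Predicates.alwaysTrue() and Predicates.alwaysFalse(). As a self-contained sketch (again declaring a minimal stand-in for the Predicate interface), the two constant predicates boil down to:

```java
// Minimal stand-in for the Predicate interface, declared only to keep
// this sketch self-contained.
interface Predicate<T> {
  boolean apply(T input);
}

// Equivalent of Guava's Predicates.alwaysTrue(): accepts everything.
class AlwaysTrue<T> implements Predicate<T> {
  public boolean apply(T input) { return true; }
}

// Equivalent of Guava's Predicates.alwaysFalse(): rejects everything.
class AlwaysFalse<T> implements Predicate<T> {
  public boolean apply(T input) { return false; }
}

public class ConstantPredicatesDemo {
  public static void main(String[] args) {
    Predicate<String> yes = new AlwaysTrue<String>();
    Predicate<String> no = new AlwaysFalse<String>();
    System.out.println(yes.apply("anything")); // true
    System.out.println(no.apply("anything"));  // false
  }
}
```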


Where to locate your predicates

I often used to write anonymous predicates at first; however, they always ended up being used more often, so they were frequently promoted to actual classes, nested or not.

By the way, where to locate these predicates? Following Robert C. Martin and his Common Closure Principle (CCP) :

Classes that change together, belong together

Because predicates manipulate objects of a certain type, I like to co-locate them close to the type they take as a parameter. For example, the classes CustomerOrderPredicate, PendingOrderPredicate and RecentOrderPredicate should reside in the same package as the class PurchaseOrder that they evaluate, or in a sub-package if you have a lot of them. Another option is to define them nested within the type itself. Obviously, the predicates are quite coupled to the objects they operate on.


Here are the source files for the examples in this article: cyriux_predicates_part1 (zip)

In the next part, we’ll have a look at how predicates simplify testing, how they relate to Specifications in Domain-Driven Design, and some additional stuff to get the best out of your predicates.

Read More

Key insights that you probably missed on DDD

As suggested by its name, Domain-Driven Design is not only about Event Sourcing and CQRS. It all starts with the domain and a lot of key insights that are too easy to overlook at first. Even if you’ve read the “blue book” already, I suggest you read it again as the book is at the same time wide and deep.

You got talent

A new natural language that makes heavy use of your thumbs

Behind the basics of Domain-Driven Design, one important idea is to harness the huge talent we all have: the ability to speak, and this talent of natural language can help us reason about the considered domain.

Just like multi-touch and tangible interfaces aim at reusing our natural strength in using our fingers, Eric Evans suggests that we use our language ability as an actual tool to try modelling concepts out loud, and to test whether they pass the simple test of being useful in sentences about the domain.

This is a simple idea, yet powerful. No need for any extra framework or tool: one of the most powerful tools we can imagine is already there, wired in our brain. The trick is to find a middle way between natural language in all its fuzziness and an expressive model that we can discuss without ambiguity, and this is exactly what the Ubiquitous Language addresses.

One model to rule them all

Another key insight in Domain-Driven Design is to identify (that is, to equate) the implementation model with the analysis model, so that there is only one model across every aspect of the software process, from requirements and analysis to code.

This does not mean you must have only one domain model in your application; in fact you will probably get more than one model across the various areas* of the application. But it does mean that in each area there must be only one model, shared by developers and domain experts. This clearly opposes some early methodologies that advocated a distinct analysis model followed by a separate, more detailed implementation model. This also leads naturally to the Ubiquitous Language, a common language between domain experts and the technical team.

The key driver is that the knowledge gained through analysis can be directly used in the implementation, with no gap, mismatch or translation. This assumes of course that the underlying programming language is modelling-oriented, which object oriented languages obviously are.

What form for the model?

Text is supplemented by pictures

Shall the model be expressed in UML? Eric Evans is again quite pragmatic: nothing beats natural language to express the two essential aspects of a model: the meaning of its concepts, and their behaviour. Text, in English or any other spoken language, is therefore the best choice to express a model, while diagrams, standard or not, and even pictures, can supplement it to express a particular structure or perspective.

If you try to express the entirety of the model using UML, then you’re just using UML as a programming language. Using only a programming language such as Java to represent a model exhibits by the way the same shortcoming: it is hard to get the big picture and to grasp the large scale structure. Simple text documents along with some diagrams and pictures, if really used and therefore kept up-to-date, help explain what’s important about the model, otherwise expressed in code.

A final remark

The beauty in Domain-Driven Design is that it is not just a set of independent good ideas on why and how to build domain models; it is itself a complete system of inter-related ideas, each useful on their own but that also supplement each other. For example, the idea of using natural language as a modelling tool and the idea of sharing one same model for analysis and implementation both lead to the Ubiquitous Language.

* Areas would typically be different Bounded Contexts

Read More

Patterns for using custom annotations

If you happen to create your own annotations, for instance to use with Java 6 Pluggable Annotation Processors, here are some patterns that I have collected over time. Nothing new, nothing fancy, just putting everything into one place, with some proposed names.


Local-name annotation

Have your tools accept any annotation as long as its simple name (without the fully-qualified prefix) is the expected one. For example, com.acme.NotNull and net.companyname.NotNull would be considered the same. This enables you to use your own annotations rather than the ones packaged with the tools, so as not to depend on them.

Example in the Guice documentation:

Guice recognizes any @Nullable annotation, like edu.umd.cs.findbugs.annotations.Nullable or javax.annotation.Nullable.

Composed annotations

Annotations can have annotations as values. This allows for complex, tree-like configurations, such as mappings from one format to another (from/to XML, JSON, RDBMS).

Here is a rather simple example from the Hibernate annotations documentation:

@AssociationOverride(
   name="propulsion",
   joinColumns = @JoinColumn(name="fld_propulsion_fk")
)

Multiplicity Wrapper

Java does not allow the same annotation to be used several times on a given target.

To work around that limitation, you can create a special annotation that expects a collection of values of the desired annotation type. For example, if you’d like to apply the annotation @Advantage several times, you create the Multiplicity Wrapper annotation: @Advantages(advantages = {@Advantage}).

Typically the multiplicity wrapper is named after the plural form of its enclosed elements.

Example in Hibernate annotations documentation:

@AttributeOverrides( {
   @AttributeOverride(name="iso2", column = @Column(name="bornIso2") ),
   @AttributeOverride(name="name", column = @Column(name="bornCountryName") )
} )
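As a runnable sketch of the pattern, reusing the @Advantage/@Advantages names from the text (the Offer class and the string values are invented for illustration; note that since Java 8 the @Repeatable meta-annotation addresses this natively):

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;

// The wrapped annotation, which cannot be repeated directly on a target.
@Retention(RetentionPolicy.RUNTIME)
@interface Advantage {
  String value();
}

// The Multiplicity Wrapper: named after the plural form, it holds several
// @Advantage values.
@Retention(RetentionPolicy.RUNTIME)
@interface Advantages {
  Advantage[] advantages();
}

// Hypothetical annotated class.
@Advantages(advantages = { @Advantage("cheap"), @Advantage("fast") })
class Offer {
}

public class MultiplicityWrapperDemo {
  public static void main(String[] args) {
    // A tool reads the wrapper, then iterates over the enclosed annotations
    Advantages wrapper = Offer.class.getAnnotation(Advantages.class);
    for (Advantage advantage : wrapper.advantages()) {
      System.out.println(advantage.value()); // prints cheap, then fast
    }
  }
}
```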



Meta-annotation

It is not possible in Java for annotations to derive from each other. To work around that, the idea is simply to annotate your new annotation with the “super” annotation, which becomes a meta-annotation.

Whenever you use your own annotation carrying the meta-annotation, the tools will actually consider it as if it were the meta-annotation.

This kind of meta-inheritance helps centralize the coupling to the external annotation in one place, while making the semantics of your own annotation more precise and meaningful.

Example in Spring annotations, with the annotation @Component, but also works with annotation @Qualifier:

Create your own custom stereotype annotation that is itself annotated with @Component:

@Component
public @interface MyComponent {
  String value() default "";
}

@MyComponent
public class MyClass...

Another example in Guice, with the Binding Annotation:

@BindingAnnotation
@Target({ FIELD, PARAMETER, METHOD })
@Retention(RUNTIME)
public @interface PayPal {}

// Then use it
public class RealBillingService implements BillingService {
  @Inject
  public RealBillingService(@PayPal CreditCardProcessor processor,
      TransactionLog transactionLog) {
    ...
  }
}

Refactoring-proof values

Prefer values that are robust to refactorings over String literals. MyClass.class is better than “com.acme.MyClass”, and enums are also encouraged.

Example in Hibernate annotations documentation:

@ManyToOne( cascade = {CascadeType.PERSIST, CascadeType.MERGE}, targetEntity=CompanyImpl.class )

And another example in the Guice documentation:


Configuration Precedence rule

Convention over Configuration and Sensible Defaults are two existing patterns that make a lot of sense with respect to using annotations as part of a configuration strategy. Having no need to annotate is way better than having to annotate for little value.

Annotations are by nature embedded in the code, hence they are not well-suited for every case of configuration, in particular when it comes to deployment-specific configuration. The solution is of course to mix annotations with other mechanisms and use each of them where they are more appropriate.

The following approach, based on precedence rule, and where each mechanism overrides the previous one, appears to work well:

Default value < Annotation < XML < programmatic configuration

For example, the default values could be suited to unit testing, while the annotations define all the stable configuration, leaving the other mechanisms to configure deployments at the various stages, like production or QA environments.

This principle is common (in Spring and Java EE 6, among others), for example in JPA:

The concept of configuration by exception is central to the JPA specification.


This post is mostly a notepad of various patterns on how to use annotations, for instance when creating tools that process annotations, such as the Annotation Processing Tools in Java 5 and the Pluggable Annotations Processors in Java 6.

Don’t hesitate to contribute better patterns names, additional patterns and other examples of use.

EDIT: A related previous post, with a focus on how annotations can lead to coupling hence dependencies.

Pictures Creative Commons from Flickr, by ninaksimon and Iwan Gabovitch.

Read More

What next language will you choose?

Now that enterprises have chosen stable platforms (JVM and .Net), on top of which we can choose a syntax out of many available, which language will you pick up as your next favourite?

New languages to have a look at (my own selection)

Based on what I read every day in various blogs, I arbitrarily reduced the list of candidates to just a few languages that I believe are rising and promising:

  • Scala
  • F#
  • Clojure
  • Ruby
  • Groovy

Of course each language has its own advantages, and we should definitely take a look at several of them, not just one. But the popularity of a language also matters, to have a chance of using it in your daily work.

Growth rates stats

In order to get some facts about the trends of these new programming languages in the real world, Google Trends is your friend. Here is the graph of the rate of growth (NOT absolute numbers), worldwide:

I’m impressed at how sharply Clojure is taking off… In absolute terms, Ruby comes first, Groovy second.

So much for the search queries on Google. What about the job posts? Again these are growth rates, not absolute numbers, and with a focus on the US.

Again, the growth of Clojure is impressive, even though it remains very small in absolute terms, where Ruby comes first, followed by Groovy.

Popularity index

So far the charts only showed how new languages are progressing compared to each other.

To get an indication of the actual present popularity of each language, the usual place to go is the TIOBE index (last published June 2010):

The TIOBE Programming Community index gives an indication of the popularity of programming languages. The index is updated once a month. The ratings are based on the number of skilled engineers world-wide, courses and third party vendors. The popular search engines Google, MSN, Yahoo!, Wikipedia and YouTube are used to calculate the ratings.

In short:

  • Ruby is ranked 12th; it is kinda mainstream already
  • LISP/Scheme/Clojure are ranked together at 16th, almost the same rank as in 2005, when Clojure did not exist yet.
  • Scala, Groovy and F#/Caml are ranked 43rd, 44th and 45th respectively


Except for Ruby, the new languages Scala, Groovy, F# and Clojure are not yet well established, but they are progressing quickly, especially Clojure, followed by Scala.

In absolute terms, and within my selection of languages, Groovy is the most popular after Ruby, followed by Scala. Clojure and F# are still far behind.

I have a strong feeling that the time has come for developers to mix alternate syntaxes with their legacy language (Java or C#), still on top of the same platform, something also called polyglot programming. It’s also time for the ideas of functional programming to become more popular.

In practice, these new languages are more likely to be introduced initially for automated testing and build systems, before being used for production code, especially with high productivity web frameworks that leverage their distinct strengths.

So which one to choose? If you’re already on the JVM, why not choose Groovy or Scala first, then Clojure to get a grasp on something different. If you’re on .Net, why not choose Scala and then F#. And if you were to choose Ruby, I guess you would have done it already.


Shanghai World Expo 2010

In late May we went to the Shanghai World Expo 2010 for two days. Here are pictures and some feedback on this huge event.

As for our overall impression: if you happen to go to Shanghai, you may go see the Expo 2010, but don’t plan on going there just for that. As Adam Minter noted in his blog (quite useful for planning a visit to the expo):

if you’ve traveled internationally, there’s absolutely nothing inside of any of the pavilions that you haven’t seen before

This sums up our visit quite well, I was almost disappointed in fact.

Huge spaces, huge queues

For the two days we were there, the queues were absolutely dreadful, up to 3h30 for the Japan pavilion and the Oil 4D movie for example, and just a bit less for the other most popular pavilions. This also means we did not visit them; waiting that long is not our cup of tea.

The 3-day tickets were no longer available; we could only buy tickets for the day, not even for the next day, meaning you have to buy them each morning. Many shops that advertise tickets for sale no longer have any (and this was only 20 days after the official opening).

We also had the feeling that the Expo targets only Chinese citizens, not foreigners; for example, the movies have no subtitles.

Big big thanks to all the volunteers at the Expo! These young girls and boys who speak English are all really nice and helpful; they are the real -and only- face of the Expo, and they make the huge and rather unfriendly outdoor spaces easier to deal with. And you know what? They are not paid, yet they don’t even have the right to visit the Expo themselves, apart from buying their own ticket… (at least that is what they told me when I asked)

Our favourite pavilions

The Dutch pavilion is my favourite, with its crazy little town in the air.

The Dutch pavilion

Each house in this weird village features interesting works by artists and designers, and the ground level is free and offers a place to rest on the fake grass. The principle of the “main street” with a continuous flow of visitors just means you don’t have to wait in a queue, and the view all around is beautiful, especially at the top.

The Australian pavilion is also great, for the humour of the static presentation, and also for the show, which is both technically fascinating and one of the few to address the theme of the Expo: “better city, better life”.

In this pavilion again, the continuous flow of visitors plus a smart way to enter and exit the theater means the queue is not much of a problem.

The General Motors pavilion offers a nice anticipation of what the future of the car might be, at least in countries like China, where everything is possible. Queue time was from 1h down to 20 minutes when we eventually visited it. I appreciated how a brand like GM had the courage to present such a technical dream, although it recalls similar ideas from Toyota. The queue time remains reasonable thanks to no fewer than 4 theaters operating at the same time around a shared stage.

The pavilion of Denmark (also continuous flow, hence no queue) is nice, with its little mermaid and the funny presentation of “how we live in Denmark” (rather weird to be honest), and its crazy and convoluted bench, and again a great view at the top.

Other pavilions with not much queuing that we enjoyed: Vietnam (skinned with bamboo), New Zealand (again a continuous flow, also addresses the theme with the question: “what is better life”), Nepal (smart design in two phases, where not many people actually go to the second phase, another good idea to reduce the queue), Cuba (smart small pavilion, with just a good bar where you can buy a good drink), State Grid (with its immersive video cube), Japan Business (promotional but not too bad), CSSC (exhibition about sea ships now and in the future), and the Czech pavilion where everything is presented on the ceiling.

And a special mention for the North Korea pavilion, self-titled “Paradise for People”, which is the height of kitsch with its fountain and a “back to the 80’s” institutional video. Hard to explain that feeling…

And now some pictures in random order.

Architectural madness

If you like architecture, and we do, then on the outside the Expo offers a full-scale gallery of architects’ dreams. Each pavilion is usually built to last 6 months and must be impressive enough yet green enough at the same time. The most common strategy to that end is to wrap a more conventional, temporary construction in a thin skin of something (anything lightweight, indeed).

Pavilions that I find especially beautiful from the outside are the pavilions of Spain (with large petals made of small wood branches), Emirates (all golden metal), South Korea (geometric volumes in laser-cut aluminium), Australia, Germany, UK (always looks like 3D rendering, even for real), Vietnam (bamboo), Denmark, Canada (all skinned in wood), Russia, Estonia (colourful), Mexico (with the wings), Portugal (a nice mix of natural cork and red neon) and Luxembourg (looks like a monster house).

Movies and LED screens everywhere

World Expos are supposed to demonstrate things never seen before, or at least impressive and novel techniques and ideas. We must be very bored then: either there is nothing novel left today (which I don’t believe), or novel things are too complex to explain or too abstract to demonstrate, or perhaps we already know everything thanks to the Internet. In any case, every major pavilion is more or less centered around a show in a theater, using immersive techniques such as 3D and every possible kind of screen madness.

LED technology is also the grand winner of the Expo 2010, with giant LED screens everywhere, outside and inside each pavilion. Video projection holds on, with some video-mapping techniques, but it has already lost the battle.


Pictures taken with a small Canon compact camera, in the usual Shanghai fog.


Huang Shan Mountain – China 2010

The Huang Shan mountain range (Yellow Mountain) is a site of absolutely fabulous landscapes. This beauty must be earned on foot, at the cost of a climb of nearly a full day. We spent the night at the top so we could watch the famous sunrise appearing above the sea of clouds, a truly magnificent sight. And on the way back we stopped in the small and charming village of Hongcun.


Domain-Driven Design: where to find inspiration for Supple Design? [part2]

This is the second part of this topic, you can find the first part here.

Maths (continued)

Abstract algebra

Another classical example of a mathematical concept that I find insightful for modelling is the Group, from the field of Abstract Algebra. To be honest, I’d guess this is what Eric Evans had in mind when he talks about Closure of Operations (where it fits, define an operation whose return type is the same as the type of its arguments) or when he mentions arithmetic as a known formalism:

Draw on Established Formalisms, When You Can […] There are many such formalized conceptual frameworks, but my personal favorite is math. […] It is surprising how useful it can be to pull out some twist on basic arithmetic.

The usual integer numbers with their addition operation are an example of a Group that everybody knows (although the integers are also examples of other kinds of algebraic structures).
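As an illustrative sketch (not from the book, and with class and method names of my own), a Money-like Amount type can follow the Group structure: a plus operation closed over the type, a ZERO identity and a negate inverse:

```java
// Illustrative sketch of Closure of Operations: Amount forms a group
// under addition, with ZERO as identity and negate as inverse.
final class Amount {
    static final Amount ZERO = new Amount(0);
    private final long cents;

    Amount(long cents) { this.cents = cents; }

    // Closed operation: takes an Amount, returns an Amount.
    Amount plus(Amount other) { return new Amount(cents + other.cents); }

    // Inverse element: a.plus(a.negate()) equals ZERO.
    Amount negate() { return new Amount(-cents); }

    long cents() { return cents; }

    @Override public boolean equals(Object o) {
        return o instanceof Amount && ((Amount) o).cents == cents;
    }
    @Override public int hashCode() { return Long.hashCode(cents); }
}
```

Because plus never leaves the type, client code can chain operations freely without ever worrying about intermediate representations.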

Limits as Special Cases

Going further, the concept of zero and of limits such as infinity can be inspirations for your domain. Consider for example the domain element Currency, with values EUR, USD, JPY etc. Traders can typically request to monitor prices for one currency, but they can also request to see all currencies or none.

Just like mathematicians do when they invent limit concepts such as infinity or zero (special values that do not really exist in the world but that are convenient), we can expand Currency with the Special Case values ALL and NONE, each with its own special behaviour, i.e. behaving like any currency or like none at all (typically this means their matching methods return true or false, respectively, all the time). Trust me, it is very convenient.

You’ll recognize that the Null Object pattern generalizes the very concept of zero for arbitrary concepts.
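A minimal sketch of these Special Case values, with illustrative names, could define ALL and NONE as anonymous subclasses that override the matching behaviour, so that callers never need to special-case them:

```java
// Illustrative sketch: ordinary currencies match only themselves,
// while the Special Case values ALL and NONE override that behaviour.
class Currency {
    // ALL behaves like any currency: it matches everything.
    static final Currency ALL = new Currency("ALL") {
        @Override boolean matches(Currency other) { return true; }
    };
    // NONE behaves like no currency at all: it matches nothing.
    static final Currency NONE = new Currency("NONE") {
        @Override boolean matches(Currency other) { return false; }
    };

    private final String code;
    Currency(String code) { this.code = code; }

    // An ordinary currency matches only a currency with the same code.
    boolean matches(Currency other) { return code.equals(other.code); }

    String code() { return code; }
}
```

A price monitor can then filter quotes with the same matches call whether the trader asked for EUR, for all currencies or for none.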


Dimensions in the domain

Something I clearly learnt from maths classes at school is thinking in separate dimensions, and how looking for orthogonal dimensions helps tremendously.

Given a cloud of concepts spread over the whiteboard, if you can sort them out onto 1, 2 or 3 separate dimensions (the fewer the better), and if these dimensions are orthogonal (independent of each other), then your design gets greatly enhanced, provided of course it still makes sense in the domain.

For example we may have a dimension that groups the concepts of user, user group and company, that is probably orthogonal to another dimension that groups the concepts of financial instrument, product segment and market. Once the two dimensions are identified, we’ll seldom discuss concepts from both dimensions at the same time, because we know that they are independent.

Simple as it is, the formalism of dimensions is valuable as an inspiration for supple design. Also, once you think of explicit dimensions you may consider extrapolating known concepts such as ordering or intervals, if they make sense for your domain. In other words your domain gets more formal, which at the end of the day, once coded, will mean less code and fewer bugs.

Symmetries in the domain

This chair has 4 identical legs: lots of symmetries

One of the biggest possible inspirations for Supple Design is the idea of symmetry, as Paul Rayner quotes from Eric Evans in the 5th part of his series: Domain-Driven Design (DDD) Immersion Course – Part #5

Use symmetry as principle of Supple Design

I love this idea! Indeed, great physicists like Murray Gell-Mann have made many discoveries guided by symmetry.

When you look at elements of the domain, you can choose to focus on how we deal with them, rather than looking at what they are. Considering each perspective in turn helps to find out what properties the domain elements may have in common (for more see don’t make things artificially different).

Sometimes concepts look fundamentally different just because some of them are singular whereas others are plural. The Composite pattern offers a solution for that: you can unveil the symmetry just by looking from the right viewpoint.
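As a hedged sketch of this singular/plural symmetry using the Composite pattern (illustrative names, assuming a notional amount as the behaviour shared by a single instrument and a basket of instruments):

```java
import java.util.Arrays;
import java.util.List;

// Illustrative Composite sketch: a single Instrument and a basket of
// Instruments expose the same interface, restoring the symmetry
// between singular and plural.
interface Instrument {
    double notional();

    // A basket is itself an Instrument; its notional is the sum
    // of the notionals of its components.
    static Instrument basket(Instrument... components) {
        List<Instrument> parts = Arrays.asList(components);
        return () -> parts.stream().mapToDouble(Instrument::notional).sum();
    }
}
```

Code that prices or reports on an Instrument no longer cares whether it was handed one product or a whole basket.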

Deeper insight than the domain experts?

Domain-Driven Design advises that the domain shall not deviate from concepts and ideas that the users really know and really talk about. However, it may happen that by analysing the domain you uncover structures that the people in the business are not even aware of. In fact they may well manipulate these structures in practice without being conscious of them.

I’ve come across such a case in interest rate derivatives trading, where swap traders think of spreads and butterflies as products in their own right, and intuitively know how to combine them for their purpose. However, when you want a machine to manipulate these products, you’d better formalize these otherwise intuitive concepts a bit. In our case, simple linear combinations proved to be a good formal framework for spreads and butterflies, even though hardly any trader speaks of the topic like that.

Influences everywhere

I recommend browsing various fields for behaviours or structures that look similar to the domain being modelled. This yields benefits even if you don’t find a corresponding structure, since comparing your problem to another one forces you to ask good questions that will bring insights anyway.

With some practice you may even anticipate, and whatever concept you encounter gets automatically “indexed” in your brain just in case you would need it later. No need to remember the concept, just that it exists.

What’s important is not that you know everything, there are books and Wikipedia for that. What’s important is to be ready to dedicate some time looking for a formalism that is well-suited for your domain.

The more demanding you are when making the design supple, the more reward you’ll get over time, and by this I mean fewer ad hoc adjustments, error corrections and tricky handlings of special cases.


Domain-Driven Design: where to find inspiration for Supple Design? [part1]

Domain-Driven Design encourages analysing the domain deeply in a process called Supple Design. In his book (the blue book) and in his talks, Eric Evans gives some examples of this process; in this post I suggest some sources of inspiration and some recommendations drawn from my practice to help with it.

When a common formalism fits the domain well, you can factor it out and adapt its rules to the domain.

A known formalism can be reused as a ready-made, well understood model.

Obvious sources of inspiration

Analysis patterns

It is quite obvious in the book that DDD builds on top of Martin Fowler’s analysis patterns. The patterns Knowledge Level (aka Meta-Model) and Specification (a Strategy used as a predicate) are from Fowler, and Eric Evans mentions using and drawing insight from analysis patterns many times in the book (see Analysis Patterns: Reusable Object Models, Addison-Wesley Object Technology Series).

Reading analysis patterns helps to appreciate good design; when you’ve read enough analysis patterns, you don’t even have to remember them to be able to improve your modelling skills. In my own experience, I have learnt to look for specific design qualities such as explicitness and traceability in my design as a result of getting used to analysis patterns such as Phenomenon or Observation.

Design patterns

Design patterns are another source of inspiration, but usually less relevant to domain modelling. Evans mentions the Strategy pattern, also named Policy (I rather like using an alternative name to make it clear that we are talking about the domain, not about a technical concern), and the Composite pattern. Evans suggests considering other patterns, not just the GoF patterns, and seeing whether they make sense at the domain level.

Programming paradigms

Eric Evans also mentions that sometimes the domain is naturally well-suited for particular approaches (or paradigms) such as state machines, predicate logic and rules engines. By now the DDD community has already expanded to include event-driven as a favourite paradigm, with the Event Sourcing and CQRS approaches in particular.

On paradigms, my design style has also been strongly influenced by elements of functional programming, which I originally learnt from using Apache Commons Collections, together with an increasingly pronounced taste for immutability.


Maths

It is in fact the core job of mathematicians to factor out formal models of everything we experience in the world. As a result it is no surprise that we can draw on their work to build deeper models.

Graph theory

The great benefit of any mathematical model is that it is highly formal, and comes with plenty of useful theorems that depend on the set of axioms you can assume. In short, the whole body of maths is work already done for you, ready to reuse. To start with a well-known example, used extensively by Eric Evans, let’s consider a bit of graph theory.

If you recognize that your problem is similar (mathematicians would say isomorphic, or something like that) to a graph, then you can dive into graph theory and reuse plenty of exciting results, such as how to compute a shortest path using Dijkstra’s or the A* algorithm. Going further, the more you know or read about the theory, the more you can reuse: in other words, the lazier you can be!

In his classical example of modelling cargo shipping using Legs or using Stops, Eric Evans could also have referred to the concept of a Line Graph (aka edge-to-vertex dual), which comes with interesting results such as how to convert a graph into its edge-to-vertex dual.
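To make the “work already done for you” point concrete, here is a compact sketch of Dijkstra’s shortest-path algorithm over a plain adjacency map; the representation and names are illustrative, not tied to any shipping domain:

```java
import java.util.Comparator;
import java.util.HashMap;
import java.util.Map;
import java.util.PriorityQueue;

// Illustrative Dijkstra sketch over an adjacency map:
// node -> (neighbour -> edge weight), weights assumed non-negative.
class ShortestPath {
    static Map<String, Integer> dijkstra(
            Map<String, Map<String, Integer>> graph, String source) {
        Map<String, Integer> dist = new HashMap<>();
        dist.put(source, 0);
        // Queue entries are {distance, node}; stale entries are skipped below.
        PriorityQueue<Object[]> queue =
                new PriorityQueue<>(Comparator.comparingInt(e -> (int) e[0]));
        queue.add(new Object[] {0, source});
        while (!queue.isEmpty()) {
            Object[] entry = queue.poll();
            int d = (int) entry[0];
            String node = (String) entry[1];
            if (d > dist.getOrDefault(node, Integer.MAX_VALUE)) continue; // stale
            for (Map.Entry<String, Integer> edge
                    : graph.getOrDefault(node, Map.of()).entrySet()) {
                int candidate = d + edge.getValue();
                if (candidate < dist.getOrDefault(edge.getKey(), Integer.MAX_VALUE)) {
                    dist.put(edge.getKey(), candidate);
                    queue.add(new Object[] {candidate, edge.getKey()});
                }
            }
        }
        return dist; // shortest distance from source to every reachable node
    }
}
```

Recognizing the isomorphism is the hard part; once a routing problem is phrased as a weighted graph, this classic result comes for free.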

Trees and nested sets

Other common enough maths concepts include trees and DAGs, which come with useful notions such as the topological sort. Hierarchy containment is another useful concept, one that appears for instance in every e-commerce catalog. Again, if you recognize the mathematical concept hidden behind your domain, then you can search for prior knowledge and techniques already devised to manipulate the concept in an easy and correct way, such as how to store that kind of hierarchy in a SQL database.
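As an example of such an off-the-shelf technique, a topological sort of a small DAG can be sketched with Kahn’s algorithm; the dependency map and names below are illustrative:

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Illustrative topological sort (Kahn's algorithm) over a DAG given as
// edges: node -> list of nodes that must come after it.
class Topo {
    static List<String> sort(Map<String, List<String>> edges) {
        Map<String, Integer> inDegree = new HashMap<>();
        for (Map.Entry<String, List<String>> e : edges.entrySet()) {
            inDegree.putIfAbsent(e.getKey(), 0);
            for (String to : e.getValue()) inDegree.merge(to, 1, Integer::sum);
        }
        Deque<String> ready = new ArrayDeque<>();
        for (Map.Entry<String, Integer> e : inDegree.entrySet())
            if (e.getValue() == 0) ready.add(e.getKey());
        List<String> order = new ArrayList<>();
        while (!ready.isEmpty()) {
            String node = ready.poll();
            order.add(node);
            for (String to : edges.getOrDefault(node, List.of()))
                if (inDegree.merge(to, -1, Integer::sum) == 0) ready.add(to);
        }
        return order; // shorter than the node count if the graph has a cycle
    }
}
```

Any domain with “X must happen before Y” constraints (catalog imports, build steps, workflow stages) can reuse this result as-is.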

Don’t miss the next part: part 2

  • Maths continued
  • General principles


Reuse is not only code reuse

Reuse is often assumed to mean code reuse, but there are many other kinds of reuse. You can also reuse ideas, and even reuse the process of applying ideas.

Why reuse?

We want to reuse prior work or knowledge for several reasons:

  1. It is easier to reuse than to redo the effort that would lead to the same result.
  2. We want the best result, and the simplest way to ensure that is to reuse the solution considered to be the best in its field.
  3. We want to keep something in only one place, following the DRY principle.

Ready-made reuse

Circuit-bending is about reusing old toys to create weird musical instruments

Reuse is often assumed to mean code reuse, or at least reuse of something already written:

  • API, library or framework: code is packaged to be easy to (re)-use, and perhaps easy to customize.
  • Code snippet: example code is provided for copy-and-paste inclusion into the code under construction. Code snippets can also be tweaked.
  • Declaration file templates: many frameworks or build tools supply declaration files that can be either reused and tweaked, or imported and modified through some inheritance mechanism, such as Ant build files.
  • Aspects: Aspects isolate how to address cross-cutting concerns, and can be reused as coded aspects in AspectJ for example. However applying the aspect must be done by the developer.

Ready-made reuse is the simplest kind of reuse: take it and use it as it is. It is a no-brainer. The drawback is that this cookie-cutter approach lacks the flexibility to adapt to each particular case, unless you leave the road and start customizing at your own risk.

Ready-made APIs often try to work around this lack of flexibility through extensive configuration options.

Reuse of ideas

However there is another kind of reuse, in the form of ideas to apply each time to the project under construction. This knowledge, prepared and canned in advance, can take various forms:

  • Algorithms: an algorithm is often reused without code reuse, by applying the algorithm on the particular data structure at hand.
  • Patterns: patterns are an obvious means of reuse, and most of the time they must be applied to the code in progress, rather than imported through an API or through libraries.
  • Specifications: knowing the “what to do” can also be reused; we can reuse specifications. This is especially true when rebuilding an existing legacy application from scratch: the legacy application can be reused as a working piece of specification, rather than reusing its actual code (which we usually do not want to reuse).
  • Issues history: the history of issues already encountered and stored in a bug tracker has value for reuse too, because it can serve as a checklist of possible traps when refactoring or when starting another similar project.

When reusing ideas, you have to think deeper, which is good for your brain. But because you have to make decisions to apply each idea you reuse, it takes more time and more effort, and the end result will also depend more closely on your design and coding skills.

On the other hand, you get maximal flexibility: you can build the solution that perfectly matches the needs of each particular situation. This is invaluable, and this is why ready-made reuse is not always desirable.

Automated reuse of ideas

When ready-made reuse is not desirable, it may remain desirable to automate the process of applying codified ideas to a project under construction. The twist here is to reuse the process of applying ideas, rather than the ideas themselves.

Several examples of this approach already exist:

  • Code generation, method and field injection: generation and injection of code is a process to enable reuse of prior knowledge and effort, concentrated into a generator. For example in the tool JiBX, the knowledge of how to serialize beans into XML is canned into a bytecode generator.
  • Macros and C++ templates: just like code generation, macros and templates are processes that apply codified ideas to program elements in an automated fashion. Note that this is not the case with Java or .Net generics, which are just plain (generic, really) code reuse.
  • Middleware magic: interception, dynamic proxies and other decorators created behind the scenes by most JEE application servers are a form of reusing ideas by applying them to each specific entity or service.
  • Compiler or VM optimization: automatic compiler optimizations have removed the need to manually optimize code for most cases, thanks to automatic application of many optimization techniques.

If we want to automate the application of ideas onto code any further, we need better ways to guide the tools. For a tool to apply an idea onto existing code, you need to describe to the tool what you want. If the required amount of user input is excessive, or if it is impossible to tell the tool your intent, you’d be better off doing the work manually. This is the limiting factor on the way toward more powerful tools.

More explicit, more reuse

By making intents and concepts more explicit in the code, tools can increasingly automate tasks that otherwise would remain manual tasks:

  • Maven POM: Maven makes project information explicit in its project object model (POM), and as a result each plugin can configure itself. A plugin that needs to know where to find the source code files can access the information by itself, and this can be seen as a reuse of a predefined configuration. In contrast, with Ant you must configure the information repeatedly for each task.

Imagine now a cross-cutting concern “Auditability” which requires to “record every action that changes the state of an entity”.

In other words we need to record every action that is not safe, as defined in REST. If we can explicitly declare on each operation and service whether it is safe or not (e.g. using annotations), then a tool can automatically weave a predefined Aspect for Auditability, without any specific configuration.

This example illustrates how we can reuse both the aspect (ready-made reuse) and the configuration of its pointcuts (i.e. reuse of the process of applying the aspect), thanks to a new concept (Safe) made explicit in the code.
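A hedged sketch of this idea: a hypothetical @Safe annotation marks operations that do not change state, and a weaving tool only needs to check for its absence to decide where the audit aspect applies. All names here are illustrative, not from any real framework:

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Method;

// Hypothetical annotation: marks operations that do NOT change state
// (safe in the REST sense).
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
@interface Safe { }

// Illustrative service: balance() is safe, withdraw() changes state.
class AccountService {
    @Safe
    public int balance() { return 0; }

    public void withdraw(int amount) { /* state-changing */ }
}

// The "weaving tool" side: the audit aspect applies to every method
// NOT marked @Safe, with no per-method configuration.
class AuditWeaver {
    static boolean needsAudit(Class<?> type, String name, Class<?>... params) {
        try {
            Method m = type.getMethod(name, params);
            return !m.isAnnotationPresent(Safe.class);
        } catch (NoSuchMethodException e) {
            throw new IllegalArgumentException(e);
        }
    }
}
```

The pointcut is never written by hand: making the Safe concept explicit in the code is what lets the tool derive it.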

I believe there are still many other forms of knowledge that could be reused automatically, and this looks like an exciting area of investigation.
