Java Enums: You have grace, elegance and power and this is what I Love!

While Java 8 is on its way, are you sure you know the enums introduced in Java 5 well? Java enums are still underestimated, and it’s a pity since they are more useful than you might think: they’re not just for your usual enumerated constants!

Java enum is polymorphic

Java enums are real classes that can have behavior and even data.

Let’s represent the Rock-Paper-Scissors game using an enum with a single method. Here are the unit tests to define the behavior:

public void paper_beats_rock() {
	assertTrue(PAPER.beats(ROCK));
	assertFalse(ROCK.beats(PAPER));
}

public void scissors_beats_paper() {
	assertTrue(SCISSORS.beats(PAPER));
}

public void rock_beats_scissors() {
	assertTrue(ROCK.beats(SCISSORS));
}
And here is the implementation of the enum, which primarily relies on the ordinal integer of each enum constant, such that item N+1 beats item N. This equivalence between the enum constants and the integers is quite handy in many cases.
/** Enums have behavior! */
public enum Gesture {
	ROCK() {
		// Enums are polymorphic, that's really handy!
		public boolean beats(Gesture other) {
			return other == SCISSORS;
		}
	},
	PAPER, SCISSORS;

	// we can implement with the integer representation
	public boolean beats(Gesture other) {
		return ordinal() - other.ordinal() == 1;
	}
}

Notice that there is not a single IF statement anywhere; all the business logic is handled by the integer logic and by the polymorphism, where we override the method for the ROCK case. If the ordering between the items were not cyclic, we could implement it using just the natural ordering of the enum; here the polymorphism helps deal with the cycle.

You can do it without any IF statement! Yes you can!

This Java enum is also a perfect example that you can have your cake (offer a nice object-oriented API with intent-revealing names), and eat it too (implement with simple and efficient integer logic like in the good ol’ days).

Over my last projects I’ve used enums a lot as a substitute for classes: they are guaranteed to be singletons, and come with ordering, hashCode, equals and serialization to and from text all built in, without any clutter in the source code.

If you’re looking for Value Objects and you can represent a part of your domain with a limited set of instances, then the enum is what you need! It’s a bit like the sealed case class in Scala, except it’s restricted to a set of instances all defined at compile time. This bounded set of instances at compile time is a real limitation, but now with continuous delivery, you can probably wait for the next release if you really need one extra case.
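
As a minimal sketch of what such an enum-as-Value-Object can look like (the Currency type and its fields are invented for illustration, not taken from a real project):

```java
/** A currency as a Value Object: a bounded set of immutable instances. */
public enum Currency {
	EUR("Euro", 2), USD("US Dollar", 2), JPY("Yen", 0);

	private final String displayName;
	private final int decimalDigits;

	private Currency(String displayName, int decimalDigits) {
		this.displayName = displayName;
		this.decimalDigits = decimalDigits;
	}

	public String displayName() {
		return displayName;
	}

	public int decimalDigits() {
		return decimalDigits;
	}
}
```

equals, hashCode, compareTo, name() and valueOf(String) all come for free; the price to pay is that adding a fourth currency requires a new release.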

Well-suited for the Strategy pattern

Let’s move on to a system for the (in-)famous Eurovision song contest; we want to be able to configure when to notify (or not) users of any new Eurovision event. It’s important. Let’s do that with an enum:

/** The policy on how to notify the user of any Eurovision song contest event */
public enum EurovisionNotification {

	/** I love Eurovision, don't want to miss it, never! */
	ALWAYS() {
		public boolean mustNotify(String eventCity, String userCity) {
			return true;
		}
	},

	/**
	 * I only want to know about Eurovision if it takes place in my city, so
	 * that I can take holidays elsewhere at the same time
	 */
	ONLY_IF_IN_MY_CITY() {
		// a case of flyweight pattern since we pass all the extrinsic data as
		// arguments instead of storing them as member data
		public boolean mustNotify(String eventCity, String userCity) {
			return eventCity.equalsIgnoreCase(userCity);
		}
	},

	/** I don't care, I don't want to know */
	NEVER() {
		public boolean mustNotify(String eventCity, String userCity) {
			return false;
		}
	};

	// no default behavior
	public abstract boolean mustNotify(String eventCity, String userCity);
}


And a unit test for the non-trivial case ONLY_IF_IN_MY_CITY:

public void notify_users_in_Baku_only() {
	assertThat(ONLY_IF_IN_MY_CITY.mustNotify("Baku", "BAKU")).isTrue();
	assertThat(ONLY_IF_IN_MY_CITY.mustNotify("Baku", "Paris")).isFalse();
}

Here we declare the method abstract and implement it for each case. An alternative would be to implement a default behavior and override it only for the cases where it makes sense, just like in the Rock-Paper-Scissors game.

Again we don’t need to switch on the enum to choose the behavior; we rely on polymorphism instead. You probably don’t need to switch on enums much anyway, except for dependency reasons. For example, when the enum is part of a message sent to the outside world, as in Data Transfer Objects (DTOs), you do not want any dependency on your internal code in the enum or its signature.

For the Eurovision strategy, using TDD we could start with a simple boolean for the cases ALWAYS and NEVER. It would then be promoted into the enum as soon as we introduce the third strategy, ONLY_IF_IN_MY_CITY. Promoting primitives is also in the spirit of the “Wrap all primitives” rule from Object Calisthenics, and an enum is the perfect way to wrap a boolean or an integer with a bounded set of possible values.

Because the strategy pattern is often controlled by configuration, the built-in serialization to and from String is also very convenient to store your settings.
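
As a sketch of that round-trip (the settings key and helper are hypothetical; the built-in name()/valueOf(String) pair does all the work):

```java
public class NotificationSettings {

	// Simplified copy of the strategy enum, for illustration only
	enum EurovisionNotification { ALWAYS, ONLY_IF_IN_MY_CITY, NEVER }

	/** Restores the policy from the String form stored in the settings. */
	static EurovisionNotification policyFrom(String stored) {
		return EurovisionNotification.valueOf(stored.trim().toUpperCase());
	}

	public static void main(String[] args) {
		// what we would write into the settings store
		String stored = EurovisionNotification.ONLY_IF_IN_MY_CITY.name();
		// ...and what we read back at start-up
		System.out.println(policyFrom(stored)); // prints ONLY_IF_IN_MY_CITY
	}
}
```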

Perfect match for the State pattern

Just like the Strategy pattern, the Java enum is very well-suited for finite state machines, where by definition the set of possible states is finite.

A baby as a finite state machine

Let’s take the example of a baby simplified as a state machine, and make it an enum:

/**
 * The primary baby states (simplified)
 */
public enum BabyState {

	POOP(null), SLEEP(POOP), EAT(SLEEP), CRY(EAT);

	private final BabyState next;

	private BabyState(BabyState next) {
		this.next = next;
	}

	public BabyState next(boolean discomfort) {
		if (discomfort) {
			return CRY;
		}
		return next == null ? EAT : next;
	}
}

And of course some unit tests to drive the behavior:

public void eat_then_sleep_then_poop_and_repeat() {
	assertEquals(SLEEP, EAT.next(false));
	assertEquals(POOP, SLEEP.next(false));
	assertEquals(EAT, POOP.next(false));
}

public void if_discomfort_then_cry_then_eat() {
	assertEquals(CRY, SLEEP.next(true));
	assertEquals(EAT, CRY.next(false));
}

Yes, enum constants can reference one another, with the restriction that only constants defined earlier can be referenced. Here we have a cycle between the states EAT -> SLEEP -> POOP -> EAT etc., so we need to open the cycle at one point and close it again with a workaround at runtime (the null check above).

We actually have a graph rather than a simple cycle, since the CRY state can be reached from any other state.

I’ve already used enums to represent simple trees by category, simply by referencing in each node its child elements, all as enum constants.

Enum-optimized collections

Enums also have the benefit of coming with dedicated implementations for Map and Set: EnumMap and EnumSet.

These collections have the same interface and behave just like your regular collections, but internally they exploit the integer nature of enums as an optimization. In short, you get old C-style data structures and idioms (bit masking and the like) hidden behind an elegant interface. This also demonstrates that you don’t have to compromise your APIs for the sake of efficiency!

To illustrate the use of these dedicated collections, let’s represent the 7 cards in Jurgen Appelo’s Delegation Poker:

public enum AuthorityLevel {

	/** make decision as the manager */
	TELL,

	/** convince people about decision */
	SELL,

	/** get input from team before decision */
	CONSULT,

	/** make decision together with team */
	AGREE,

	/** influence decision made by the team */
	ADVISE,

	/** ask feedback after decision by team */
	INQUIRE,

	/** no influence, let team work it out */
	DELEGATE;

There are 7 cards; the first 3 are more control-oriented, the middle card is balanced, and the last 3 are more delegation-oriented (I made that interpretation up; please refer to his book for explanations). In Delegation Poker, every player selects a card for a given situation and earns as many points as the card value (from 1 to 7), except the players in the “highest minority”.

It’s trivial to compute the number of points using the ordinal value + 1. It is also straightforward to select the control-oriented cards by their ordinal value, or we can use a Set built from a range, as we do below to select the delegation-oriented cards:

	public int numberOfPoints() {
		return ordinal() + 1;
	}

	// It's ok to use the internal ordinal integer for the implementation
	public boolean isControlOriented() {
		return ordinal() < AGREE.ordinal();
	}

	// EnumSet is a Set implementation that benefits from the integer-like
	// nature of the enums
	public static Set<AuthorityLevel> DELEGATION_LEVELS = EnumSet.range(ADVISE, DELEGATE);

	// enums are comparable hence the usual benefits
	public static AuthorityLevel highest(List<AuthorityLevel> levels) {
		return Collections.max(levels);
	}
}

EnumSet offers convenient static factory methods like range(from, to) to create a set that includes every enum constant between ADVISE and DELEGATE in our example, in declaration order.

To compute the highest minority we start from the highest card, which is nothing but finding the max, trivial since an enum is always Comparable.

Whenever we need to use this enum as a key in a Map, we should use the EnumMap, as illustrated in the test below:

// Using an EnumMap to represent the votes by authority level
public void votes_with_a_clear_majority() {
	final Map<AuthorityLevel, Integer> votes = new EnumMap<AuthorityLevel, Integer>(AuthorityLevel.class);
	votes.put(SELL, 1);
	votes.put(ADVISE, 3);
	votes.put(INQUIRE, 2);
}

Java enums are good, eat them!

I love Java enums: they’re just perfect for Value Objects in the Domain-Driven Design sense, where the set of every possible value is bounded. In a recent project I deliberately managed to have a majority of value types expressed as enums. You get a lot of awesomeness for free, and with almost no technical noise. This helps improve the signal-to-noise ratio between the words from the domain and the technical jargon.

Of course I make sure each enum constant is also immutable, and I get the correct equals, hashCode, toString, String or integer serialization, singleton-ness and very efficient collections on them for free, all with very little code.

(picture from Jim Barnabee’s article)
The power of polymorphism

Enum polymorphism is very handy: I never use instanceof on enums, and I hardly ever need to switch on an enum either.

I’d love to see the Java enum complemented by a similar construct, just like the case class in Scala, for when the set of possible values cannot be bounded. And a way to enforce immutability of any class would be nice too. Am I asking too much?

Also <troll>don’t even try to compare the Java enum with the C# enum…</troll>


The untold art of Composite-friendly interfaces

The Composite pattern is a very powerful design pattern that you use regularly to manipulate a group of things through the very same interface as a single thing. By doing so you don’t have to discriminate between the singular and plural cases, which often simplifies your design.

Yet there are cases where you are tempted to use the Composite pattern but the interface of your objects does not fit quite well. Fear not, some simple refactorings on the methods signatures can make your interfaces Composite-friendly, because it’s worth it.

Always start with examples

Imagine an interface for a financial instrument with a getter on its currency:

public interface Instrument {
  Currency getCurrency();
}

This interface is alright for a single instrument, but it does not scale to a group of instruments (the Composite pattern), because the corresponding getter in the composite class would look like this (notice that the return type is now a collection):

public class CompositeInstrument {
  // list of instruments...

  public Set<Currency> getCurrencies() {...}
}

We must admit that each instrument within a composite instrument may have a different currency, so the composite as a whole may be multi-currency: hence the collection return type. This breaks the goal of the Composite pattern, which is to unify the interfaces for single and multiple elements. If we stop there, we now have to discriminate between a single Instrument and a CompositeInstrument, and we have to do it at every call site. I’m not happy with that.

The composite pattern applied to a lamp: same plug for one or several lamps

The brutal approach

The brutal approach is to generalize the initial interface so that it works for the composite case:

public interface Instrument {
  Set<Currency> getCurrencies();
}

This interface now works for both the single case and the composite case, but at the cost of always having to deal with a collection as the return value. In fact, I’m not sure we’ve simplified our design with this approach: if the composite case is not used that often, we may even have complicated the design for little benefit, because the returned collection type always gets in our way, requiring a loop every time it is called.

The trick to improve on that is to investigate what our interface is really used for. The getter on the initial interface only reveals that we did not think about the actual use beforehand; in other words, it shows a design decision made “by default”, or rather a lack of one.

Turn it into a boolean method

Very often this kind of getter is mostly used to test whether the instrument (single or composite) has something to do with a given currency, for example to check if an instrument is acceptable for a screen in USD or tradable by a trader who is only granted the right to trade in EUR.

In this case, you can revamp the method into another intention-revealing method that accepts a parameter and returns a boolean:

public interface Instrument {
  boolean isInCurrency(Currency currency);
}

This interface remains simple, is closer to our needs, and in addition it now scales for use with a Composite, because the result for a composite instrument can be derived from the result on each single instrument, combined with the AND operator:

public class CompositeInstrument {
  // list of instruments...

  public boolean isInCurrency(Currency currency) {
     boolean result = true;
     // for each instrument, result &= instrument.isInCurrency(currency);
     return result;
  }
}

Something to do with Fold

As shown above, the problem is all about the return value. Generalizing from the boolean logic of the previous example (‘&=’), the overall trick for a Composite-friendly interface is to define methods that return a type that is easy to fold over successive executions. Here the trick is to merge (“fold”) the boolean results of several calls into one single boolean result, which you typically do with AND or OR on booleans.

If the return type is a collection, then you could perhaps merge the results using addAll(…) if it makes sense for the operation.
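
For instance, the multi-currency getter from the beginning of this article can be folded with addAll; the sketch below uses hypothetical types, with a HashSet deduplicating currencies shared by several children:

```java
import java.util.*;

public class CurrencyFolding {

    interface Instrument {
        Set<String> getCurrencies();
    }

    /** Composite: folds the children's results into one set with addAll. */
    static class CompositeInstrument implements Instrument {
        private final List<Instrument> instruments = new ArrayList<Instrument>();

        void add(Instrument instrument) { instruments.add(instrument); }

        public Set<String> getCurrencies() {
            Set<String> result = new HashSet<String>();
            for (Instrument instrument : instruments) {
                result.addAll(instrument.getCurrencies());
            }
            return result;
        }
    }

    public static void main(String[] args) {
        CompositeInstrument composite = new CompositeInstrument();
        composite.add(new Instrument() {
            public Set<String> getCurrencies() { return Collections.singleton("EUR"); }
        });
        composite.add(new Instrument() {
            public Set<String> getCurrencies() { return Collections.singleton("USD"); }
        });
        System.out.println(composite.getCurrencies().size()); // prints 2
    }
}
```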

Technically, this is easily done when the return type is closed under an operation (a magma), i.e. when the result of the operation is of the same type as its operands, just like ‘boolean1 AND boolean2‘ is also a boolean.

This is obviously the case for booleans and their boolean logic, but also for numbers and their arithmetic, collections and their set operations, strings and their concatenation, and many other types including your own classes; as Eric Evans suggests, favour “Closure of Operations”, as described in his book Domain-Driven Design.

Fire hydrants: from one pipe to multiple pipes (composite)

Turn it into a void method

Though not possible in our previous example, void methods work very well with the Composite pattern: with nothing to return, there is no need to unify or fold anything:

public class CompositeFunction {
  List<Function> functions = ...;

  public void apply(...) {
     // for each function, function.apply(...);
  }
}

Continuation-passing style

The last trick to help with the Composite pattern is to adopt the continuation-passing style: pass a continuation object as a parameter to the method. The method then sets its result on the continuation instead of using its return value.

As an example, to perform search on every node of a tree, you may use a continuation like this:

public class SearchResults {
   public void addResult(Node node) { /* append to list of results... */ }
   public List<Node> getResults() { /* return list of results... */ }
}

public class Node {
  List<Node> children = ...;

  public void search(SearchResults sr) {
     if (found) {
        sr.addResult(this);
     }
     // for each child, child.search(sr);
  }
}

By passing a continuation as an argument to the method, the continuation takes care of the multiplicity, and the method is now well suited for the Composite pattern. You may consider that the continuation encapsulates into one object the process of folding the result of each call; of course the continuation is mutable.

This style does complicate the interface of the method a little, but it also offers the advantage of a single allocation of one continuation instance across every call.

That's continuation passing style (CC Some rights reserved by 2011 BUICK REGAL)

One word on exceptions

Methods that can throw exceptions (even unchecked ones) can complicate their use in a composite. To deal with exceptions within the loop that calls each child, you can simply throw the first exception encountered, at the expense of giving up the rest of the loop. An alternative is to collect every caught exception into a collection, then throw a composite exception wrapping that collection once you’re done with the loop. In other cases the composite loop may also be a convenient place to do the actual exception handling, such as full logging, in one central place.
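
A sketch of the collect-then-throw alternative; the Task and CompositeException names are invented for the example:

```java
import java.util.*;

public class CompositeRunner {

    interface Task { void run() throws Exception; }

    /** Carries every failure from one composite pass. */
    static class CompositeException extends Exception {
        private final List<Exception> causes;
        CompositeException(List<Exception> causes) {
            super(causes.size() + " task(s) failed");
            this.causes = causes;
        }
        List<Exception> getCauses() { return causes; }
    }

    /** Runs every task even if some fail, then reports all failures at once. */
    static void runAll(List<Task> tasks) throws CompositeException {
        List<Exception> failures = new ArrayList<Exception>();
        for (Task task : tasks) {
            try {
                task.run();
            } catch (Exception e) {
                failures.add(e); // keep looping instead of giving up
            }
        }
        if (!failures.isEmpty()) {
            throw new CompositeException(failures);
        }
    }
}
```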

In closing

We’ve seen some tricks to adjust the signature of your methods so that they work well with the Composite pattern, typically by folding the return type in some way. In return, you don’t have to discriminate manually between the single and the multiple, and a single interface can be used much more often; it is with this kind of detail that you can keep your design simple and ready for any new challenge.

Follow me on Twitter! Credits: Pictures from myself, except the assembly line by BUICK REGAL (Flickr)


A touch of functional style in plain Java with predicates – Part 2

In the first part of this article we introduced predicates, which bring some of the benefits of functional programming to object-oriented languages such as Java, through a simple interface with one single method that returns true or false. In this second and last part, we’ll cover some more advanced notions to get the best out of your predicates.


One obvious case where predicates shine is testing. Whenever you need to test a method that mixes walking a data structure with some conditional logic, predicates let you test each half in isolation: first the walking of the data structure, then the conditional logic.

In a first step, you simply pass either the always-true or the always-false predicate to the method, to get rid of the conditional logic and focus just on correctly walking the data structure:

// check with the always-true predicate
final Iterable<PurchaseOrder> all = orders.selectOrders(Predicates.<PurchaseOrder> alwaysTrue());
assertEquals(2, Iterables.size(all));

// check with the always-false predicate
assertTrue(Iterables.isEmpty(orders.selectOrders(Predicates.<PurchaseOrder> alwaysFalse())));

In a second step, you just test each possible predicate separately.

final CustomerPredicate isForCustomer1 = new CustomerPredicate(CUSTOMER_1);
assertTrue(isForCustomer1.apply(ORDER_1)); // ORDER_1 is for CUSTOMER_1
assertFalse(isForCustomer1.apply(ORDER_2)); // ORDER_2 is for CUSTOMER_2

This example is simple, but you get the idea. To test more complex logic, if testing each half of the feature is not enough, you may create mock predicates, for example a predicate that returns true once, then always false afterwards. Forcing the predicate like that can considerably simplify your test set-up, thanks to the strict separation of concerns.
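
Such a mock predicate is a one-liner to write; here is a sketch, with a minimal Predicate interface standing in for Guava’s:

```java
/** Minimal stand-in for Guava's com.google.common.base.Predicate. */
interface Predicate<T> {
    boolean apply(T input);
}

/** Test double: evaluates to true on the first call only, then always false. */
final class TrueOncePredicate<T> implements Predicate<T> {
    private boolean consumed = false;

    public boolean apply(T input) {
        if (consumed) {
            return false;
        }
        consumed = true;
        return true;
    }
}
```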

Predicates work so well for testing that if you tend to do some TDD, I mean if the way you can test influences the way you design, then as soon as you know predicates they will surely find their way into your design.

Explaining to the team

In the projects I’ve worked on, the team was not familiar with predicates at first. However, the concept is easy and fun enough for everyone to pick up quickly. In fact, I’ve been surprised by how naturally the idea of predicates spread from the code I had written to my colleagues’ code, without much evangelism on my part. I guess the benefits of predicates speak for themselves. Having mature APIs from big names like Apache or Google also helps convince people that this is serious stuff. And now, with the functional programming hype, it should be even easier to sell!

Simple optimizations

This engine is so big, no optimization is required (Chicago Auto Show).

The usual optimizations are to make predicates immutable and stateless as much as possible, to enable sharing them with no consideration of threading. This enables using a single instance for the whole process (as a singleton, e.g. as a static final constant). The most frequently used predicates that cannot be enumerated at compile time may be cached at runtime if required. As usual, do it only if your profiler report really calls for it.

When possible, a predicate object can pre-compute part of the calculations involved in its evaluation, either in its constructor (naturally thread-safe) or lazily.

A predicate is expected to be side-effect-free, in other words “read-only”: its execution should not cause any observable change to the system state. Some predicates must have internal state, like a counter-based predicate used for paging, but even these must not change any state in the system they apply to. Predicates with internal state also cannot be shared; however, they may be reused within their thread if they support a reset between successive uses.
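
A sketch of such a counter-based paging predicate (again with a minimal Predicate interface in place of Guava’s); it is stateful, so it must not be shared between threads, and reset() must be called between two successive uses:

```java
/** Minimal stand-in for Guava's Predicate interface. */
interface Predicate<T> {
    boolean apply(T input);
}

/** Stateful predicate: accepts only the first pageSize elements it sees. */
final class PagingPredicate<T> implements Predicate<T> {
    private final int pageSize;
    private int seen = 0;

    PagingPredicate(int pageSize) {
        this.pageSize = pageSize;
    }

    public boolean apply(T input) {
        return seen++ < pageSize;
    }

    /** Must be called between two successive uses within the same thread. */
    public void reset() {
        seen = 0;
    }
}
```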

Fine-grained interfaces: a larger audience for your predicates

In large applications you find yourself writing very similar predicates for totally different types that nonetheless share a common property, like being related to a Customer. For example, in the administration page you may want to filter logs by customer; in the CRM page you want to filter complaints by customer.

For each such type X you’d need yet another CustomerXPredicate to filter it by customer. But since each X is related to a customer in some way, we can factor that out (Extract Interface in Eclipse) into an interface CustomerSpecific with one method:

public interface CustomerSpecific {
   Customer getCustomer();
}

This fine-grained interface reminds me of traits in some languages, except it has no reusable implementation. It could also be seen as a way to introduce a touch of dynamic typing within statically typed languages, as it enables calling indifferently any object with a getCustomer() method. Of course our class PurchaseOrder now implements this interface.

Once we have this interface CustomerSpecific, we can define predicates on it rather than on each particular type as we did before. This helps leverage just a few predicates throughout a large project. In this case, the predicate CustomerPredicate is co-located with the interface CustomerSpecific it operates on, and its generic type parameter is CustomerSpecific:

public final class CustomerPredicate implements Predicate<CustomerSpecific>, CustomerSpecific {
  private final Customer customer;
  // valued constructor omitted for clarity
  public Customer getCustomer() {
    return customer;
  }
  public boolean apply(CustomerSpecific specific) {
    return specific.getCustomer().equals(customer);
  }
}

Notice that the predicate can itself implement the interface CustomerSpecific, hence could even evaluate itself!

When using trait-like interfaces in this way, you must take care of the generics and change the method that expects a Predicate<PurchaseOrder> in the class PurchaseOrders a bit, so that it also accepts any predicate on a supertype of PurchaseOrder:

public Iterable<PurchaseOrder> selectOrders(Predicate<? super PurchaseOrder> condition) {
    return Iterables.filter(orders, condition);
}

Specification in Domain-Driven Design

Eric Evans and Martin Fowler wrote the Specification pattern together, and a Specification is clearly a predicate. Actually, “predicate” is the term used in logic programming, and the Specification pattern was written to explain how we can borrow some of the power of logic programming into our object-oriented languages.

In the book Domain-Driven Design, Eric Evans details this pattern and gives several examples of Specifications, all of which express parts of the domain. Just as the book describes a Policy pattern that is nothing but the Strategy pattern applied to the domain, the Specification pattern may in some sense be considered a version of the predicate dedicated to the domain, with the additional intent to clearly mark and identify the business rules.

As a remark, the method name suggested in the Specification pattern is: isSatisfiedBy(T): boolean, which emphasises a focus on the domain constraints. As we’ve seen before with predicates, atoms of business logic encapsulated into Specification objects can be recombined using boolean logic (or, and, not, any, all), as in the Interpreter pattern.

The book also describes some more advanced techniques such as optimization when querying a database or a repository, and subsumption.

Optimisations when querying

The following are optimization tricks, and I’m not sure you will ever need them. But it is true that predicates are quite dumb when it comes to filtering datasets: they must be evaluated on each and every element of a set, which may cause performance problems for huge sets. If the elements are stored in a database, retrieving every one of them just to filter them one after another through a predicate hardly sounds like the right idea for large sets…

When you hit performance issues, you start the profiler and find the bottlenecks. Now if calling a predicate very often to filter elements out of a data structure is a bottleneck, then how do you fix that?

One way is to get rid of the whole predicate approach and go back to hard-coded, more error-prone, repetitive and less testable code. I always resist this approach as long as I can find better alternatives to optimize the predicates, and there are many.

First, have a deeper look at how the code is being used. In the spirit of Domain-Driven Design, looking at the domain for insights should be systematic whenever a question occurs.

Very often there are clear patterns of use in a system. Though statistical, they offer great opportunities for optimisation. For example in our PurchaseOrders class, retrieving every PENDING order may be used much more frequently than every other case, because that’s how it makes sense from a business perspective, in our imaginary example.

Friend Complicity

Weird complicity (Maeght foundation)

Based on the usage pattern you may code alternate implementations that are specifically optimised for it. In our example of pending orders being frequently queried, we would code an alternate implementation FastPurchaseOrder, that makes use of some pre-computed data structure to keep the pending orders ready for quick access.

Now, in order to benefit from this alternate implementation, you may be tempted to add a dedicated method to the interface, e.g. selectPendingOrders(). Remember that before, you only had a generic selectOrders(Predicate) method. Adding the extra method may be alright in some cases, but it raises several concerns: you must implement the extra method in every other implementation too, and it may be too specific to a particular use-case to fit well on the interface.

A trick for using the internal optimization through the exact same method that only expects predicates is just to make the implementation recognize the predicate it is related to. I call that “Friend Complicity“, in reference to the friend keyword in C++.

/** Optimization method: pre-computed list of pending orders */
private Iterable<PurchaseOrder> selectPendingOrders() {
  // ... optimized stuff...
}

public Iterable<PurchaseOrder> selectOrders(Predicate<? super PurchaseOrder> condition) {
  // internal complicity here: recognize friend class to enable optimization
  if (condition instanceof PendingOrderPredicate) {
     return selectPendingOrders();// faster way
  }
  // otherwise, back to the usual case
  return Iterables.filter(orders, condition);
}

It’s clear that this increases the coupling between two implementation classes that should otherwise ignore each other. It also only helps with performance when the “friend” predicate is given directly, with no decorator or composite around it.

What’s really important with Friend Complicity is to make sure that the behaviour of the method is never compromised: the contract of the interface must be met at all times, with or without the optimisation; it’s just that the performance improvement may happen, or not. Also keep in mind that you may want to switch back to the unoptimized implementation one day.

SQL-compromised

If the orders are actually stored in a database, then SQL can be used to query them quickly. By the way, you’ve probably noticed that the very concept of a predicate is exactly what goes after the WHERE keyword in a SQL query.

Ron Arad designed a chair that encompasses another chair: this is subsumption

A first and simple way to keep using predicates yet improve performance is to have some predicates implement an additional interface SqlAware, with a method asSQL(): String that returns the exact SQL query corresponding to the evaluation of the predicate itself. When the predicate is used against a database-backed repository, the repository calls this method instead of the usual evaluate(Predicate) or apply(Predicate) method, and then queries the database with the returned query.

I call that approach SQL-compromised, as the predicate is now polluted with database-specific details it should, more often than not, ignore.

Alternatives to using SQL directly include the use of stored procedures or named queries: the predicate then has to provide the name of the query and all its parameters. Double-dispatch between the repository and the predicate passed to it is also an alternative: the repository calls the predicate’s additional method selectElements(this), which calls back the right pre-selection method findByState(state): Collection on the repository; the predicate then applies its own filtering on the returned set and returns the final filtered set.
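
A compressed sketch of that double-dispatch; every name below is hypothetical:

```java
import java.util.*;

public class DoubleDispatchDemo {

    enum State { PENDING, SHIPPED }

    static class PurchaseOrder {
        final State state;
        final String customer;
        PurchaseOrder(State state, String customer) {
            this.state = state;
            this.customer = customer;
        }
    }

    /** The repository exposes coarse pre-selection methods the predicate can call back. */
    static class OrderRepository {
        final List<PurchaseOrder> orders = new ArrayList<PurchaseOrder>();

        Collection<PurchaseOrder> findByState(State state) {
            List<PurchaseOrder> found = new ArrayList<PurchaseOrder>();
            for (PurchaseOrder o : orders) {
                if (o.state == state) found.add(o);
            }
            return found;
        }

        // first dispatch: the repository hands itself over to the predicate
        Collection<PurchaseOrder> select(PendingCustomerPredicate predicate) {
            return predicate.selectElements(this);
        }
    }

    /** Pre-selects via the repository, then applies its own finer filtering. */
    static class PendingCustomerPredicate {
        final String customer;
        PendingCustomerPredicate(String customer) { this.customer = customer; }

        Collection<PurchaseOrder> selectElements(OrderRepository repository) {
            // second dispatch: call back the right pre-selection method
            Collection<PurchaseOrder> pending = repository.findByState(State.PENDING);
            List<PurchaseOrder> result = new ArrayList<PurchaseOrder>();
            for (PurchaseOrder o : pending) {
                if (o.customer.equals(customer)) result.add(o);
            }
            return result;
        }
    }
}
```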

Subsumption

Subsumption is a logic concept expressing a relation between concepts where one encompasses another, such as “red, green, and yellow are subsumed under the term color” (Merriam-Webster). Subsumption between predicates can be a very powerful concept to implement in your code.

Let’s take the example of an application that broadcasts stock quotes. When registering we must declare which quotes we are interested in observing. We can do that by simply passing a predicate on stocks that only evaluates true for the stocks we’re interested in:

public final class StockPredicate implements Predicate<String> {
   private final Set<String> tickers;
   // Constructors omitted for clarity

   public boolean apply(String ticker) {
     return tickers.contains(ticker);
   }
}

Now let’s assume the application already broadcasts standard sets of popular tickers on messaging topics, each topic with its own predicate. If the application could detect that the predicate we want to use is “included”, or subsumed, in one of the standard predicates, we could just subscribe to that topic and save computation. In our case this subsumption check is rather easy; we just add one method to our predicates:

public boolean encompasses(StockPredicate predicate) {
   return tickers.containsAll(predicate.tickers);
}

Subsumption is all about evaluating another predicate for “containment”. This is easy when your predicates are based on sets, as in the example, or when they are based on intervals of numbers or dates. Otherwise, you may have to resort to tricks similar to Friend Complicity, i.e. recognizing the other predicate to decide whether it is subsumed, in a case-by-case fashion.

Overall, remember that subsumption is hard to implement in the general case, but even partial subsumption can be very valuable, so it is an important tool in your toolbox.
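For instance (a hypothetical sketch, with made-up topic names), a broadcaster could scan its standard topics for one whose predicate encompasses the requested one, and subscribe there instead of evaluating the custom predicate against the full stream:

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Set;

// Same shape as the StockPredicate of the article, repeated here to be self-contained
class StockPredicate {
    final Set<String> tickers;
    StockPredicate(String... tickers) {
        this.tickers = new HashSet<String>(Arrays.asList(tickers));
    }
    public boolean apply(String ticker) { return tickers.contains(ticker); }
    public boolean encompasses(StockPredicate other) {
        return tickers.containsAll(other.tickers);
    }
}

class Broadcaster {
    // standard topics and their predicates; the topic names are made up
    final Map<String, StockPredicate> topics = new LinkedHashMap<String, StockPredicate>();

    // return the first standard topic whose predicate subsumes the wanted one, or null
    String topicFor(StockPredicate wanted) {
        for (Map.Entry<String, StockPredicate> entry : topics.entrySet()) {
            if (entry.getValue().encompasses(wanted)) return entry.getKey();
        }
        return null;
    }
}
```

When topicFor returns a topic, we can subscribe to it and only apply our own predicate to that pre-filtered stream.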


Predicates are fun, and can enhance both your code and the way you think about it!


The single source file for this part is available for download (fixed broken link)


A touch of functional style in plain Java with predicates – Part 1

You keep hearing about functional programming that is going to take over the world, and you are still stuck to plain Java? Fear not, since you can already add a touch of functional style into your daily Java. In addition, it’s fun, saves you many lines of code and leads to fewer bugs.

What is a predicate?

I actually fell in love with predicates when I first discovered Apache Commons Collections, long ago when I was coding in Java 1.4. A predicate in this API is nothing but a Java interface with only one method:

evaluate(Object object): boolean

That’s it, it just takes some object and returns true or false. A more recent equivalent of Apache Commons Collections is Google Guava, with an Apache License 2.0. It defines a Predicate interface with one single method using a generic parameter:

apply(T input): boolean

It is that simple. To use predicates in your application you just have to implement this interface with your own logic in its single method apply(something).
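For instance, with a home-made interface matching that signature (so this sketch does not even need the Guava jar):

```java
// A minimal Predicate interface with the same shape as Guava's
interface Predicate<T> {
    boolean apply(T input);
}

// a predicate that evaluates whether a string is non-blank
class NonBlank implements Predicate<String> {
    public boolean apply(String input) {
        return input != null && input.trim().length() > 0;
    }
}
```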

A simple example

As an early example, imagine you have a list named orders of PurchaseOrder objects, each with a date, a Customer and a state. The various use-cases will probably require that you find every order for a given customer, or every pending, shipped or delivered order, or every order placed within the last hour. Of course you can do that with foreach loops and an if inside, in this fashion:

//List<PurchaseOrder> orders...

public List<PurchaseOrder> listOrdersByCustomer(Customer customer) {
  final List<PurchaseOrder> selection = new ArrayList<PurchaseOrder>();
  for (PurchaseOrder order : orders) {
    if (order.getCustomer().equals(customer)) {
      selection.add(order);
    }
  }
  return selection;
}

And again for each case:

public List<PurchaseOrder> listRecentOrders(Date fromDate) {
  final List<PurchaseOrder> selection = new ArrayList<PurchaseOrder>();
  for (PurchaseOrder order : orders) {
    if (order.getDate().after(fromDate)) {
      selection.add(order);
    }
  }
  return selection;
}

The repetition is quite obvious: each method is the same except for the condition inside the if clause. The idea of using predicates is simply to replace that hard-coded condition with a call to a predicate, which then becomes a parameter. This means you can write only one method, taking a predicate as a parameter, and still cover all your use-cases, even use-cases you do not know about yet:

public List<PurchaseOrder> listOrders(Predicate<PurchaseOrder> condition) {
  final List<PurchaseOrder> selection = new ArrayList<PurchaseOrder>();
  for (PurchaseOrder order : orders) {
    if (condition.apply(order)) {
      selection.add(order);
    }
  }
  return selection;
}

Each particular predicate can be defined as a standalone class, if used at several places, or as an anonymous class:

final Customer customer = new Customer("BruceWaineCorp");
final Predicate<PurchaseOrder> condition = new Predicate<PurchaseOrder>() {
  public boolean apply(PurchaseOrder order) {
    return order.getCustomer().equals(customer);
  }
};

Your friends that use real functional programming languages (Scala, Clojure, Haskell etc.) will comment that the code above is awfully verbose to do something very common, and I have to agree. However we are used to that verbosity in the Java syntax and we have powerful tools (auto-completion, refactoring) to accommodate it. And our projects probably cannot switch to another syntax overnight anyway.

Predicates are collections best friends

Didn't find any related picture, so here's an unrelated picture from my library

Coming back to our example, we wrote a foreach loop only once to cover every use-case, and we were happy with that factoring out. However, your friends doing functional programming “for real” can still laugh at this loop you had to write yourself. Luckily, both the Apache and Google APIs provide all the goodies you may expect, in particular a class similar to java.util.Collections, hence named Collections2 (not a very original name).

This class provides a method filter() that does something similar to what we had written before, so we can now rewrite our method with no loop at all:

public Collection<PurchaseOrder> selectOrders(Predicate<PurchaseOrder> condition) {
  return Collections2.filter(orders, condition);
}

In fact, this method returns a filtered view:

The returned collection is a live view of unfiltered (the input collection); changes to one affect the other.

This also means that less memory is used, since there is no actual copy from the initial collection unfiltered to the actual returned collection filtered.

On a similar approach, given an iterator, you could ask for a filtered iterator on top of it (Decorator pattern) that only gives you the elements selected by your predicate:

Iterator<PurchaseOrder> filteredIterator = Iterators.filter(unfilteredIterator, condition);

Since Java 5 the Iterable interface comes in very handy for use with the foreach loop, so we would indeed prefer to use the following expression:

public Iterable<PurchaseOrder> selectOrders(Predicate<PurchaseOrder> condition) {
  return Iterables.filter(orders, condition);
}

// you can directly use it in a foreach loop, and it reads well:
for (PurchaseOrder order : selectOrders(condition)) {
  //...
}

Ready-made predicates

To use predicates, you could simply define your own interface Predicate, or one for each type parameter you need in your application. This is possible, however the good thing in using a standard Predicate interface from an API such as Guava or Commons Collections is that the API brings plenty of excellent building blocks to combine with your own predicate implementations.

First, you may not even have to implement your own predicate at all. If all you need is a condition on whether an object is equal to another, or is not null, then you can simply ask for the predicate:

// gives you a predicate that checks if an integer is zero
Predicate<Integer> isZero = Predicates.equalTo(0);
// gives a predicate that checks for non null objects
Predicate<String> isNotNull = Predicates.notNull();
// gives a predicate that checks for objects that are instanceof the given Class
Predicate<Object> isString = Predicates.instanceOf(String.class);

Given a predicate, you can inverse it (true becomes false and the other way round):

Predicates.not(predicate1);

Combine several predicates using boolean operators AND, OR:

Predicates.and(predicate1, predicate2);
Predicates.or(predicate1, predicate2);
// gives you a predicate that checks for either zero or null
Predicate<Integer> isNullOrZero = Predicates.or(isZero, Predicates.isNull());

Of course you also have the special predicates that always return true or false, which are really, really useful, as we’ll see later for testing:

Predicates.alwaysTrue();
Predicates.alwaysFalse();

Where to locate your predicates

I used to start with anonymous predicates, but they usually ended up being used in more and more places, so they were often promoted to actual classes, nested or not.

By the way, where to locate these predicates? Following Robert C. Martin and his Common Closure Principle (CCP):

Classes that change together, belong together

Because predicates manipulate objects of a certain type, I like to co-locate them close to the type they take as a parameter. For example, the classes CustomerOrderPredicate, PendingOrderPredicate and RecentOrderPredicate should reside in the same package as the class PurchaseOrder that they evaluate, or in a sub-package if you have a lot of them. Another option would be to define them nested within the type itself. Obviously, the predicates are quite coupled to the objects they operate on.


Here are the source files for the examples in this article: cyriux_predicates_part1 (zip)

In the next part, we’ll have a look at how predicates simplify testing, how they relate to Specifications in Domain-Driven Design, and some additional stuff to get the best out of your predicates.


Toward smarter dependency constraints (patterns to the rescue)

Low coupling between objects is a key principle to help you win the battle against software entropy. Making sure your dependencies are under control matters. Several tools can enforce dependency restrictions, such as JDepend. However, in a real project with many classes, packages and modules, the real issue is how to decide and configure the allowed and forbidden dependencies. Per class? Per package? Per module? Based on gut feeling? Is there a theory for that?

Of course, in a layered architecture, the layers specify the dependencies. This is not bad, but I am sure we can do better.

Smarter dependencies

To go further, I suggest expanding our vocabulary of concepts. In OO languages such as Java, everything is a class (or an interface), grouped into packages. Such classification is not really helpful. Fortunately, several books provide ready-to-use vocabularies in the form of pattern languages (not only design patterns, but patterns in general). Some of these patterns are foundations on which rules to manage dependencies can be proposed.

Disclaimer: the dependency rules suggested below are hypotheses to be debated and verified against a corpus of actual projects; I would be happy to be given counter-examples and counter-arguments.

The child really depends upon the mother

Domain Driven Design

The book Domain Driven Design by Eric Evans defines a rich vocabulary of concepts used in every application, and we can leverage that vocabulary to propose some dependency principles between them:

  • ValueObject never depends upon Entity nor Services
  • Entities should not depend upon Services (maybe not a hard rule)
  • Generic SubDomain should not depend upon Core Domain
  • Core Domain should not depend upon Cohesive Mechanism (the “What” should not depend upon the “How”)
  • Domain Layer should not depend on any infrastructure code
  • Abstract Core module never depends on its specialized or implementation modules

Analysis Patterns

The book Analysis Patterns by Martin Fowler also provides patterns as a richer vocabulary, from which we could propose:

  • Elements from a Knowledge Level should not depend upon elements from the corresponding Operation Level

I did not find that rule written in the book, but every example appears to support it. Considering that classes and subclasses in usual OOP are a special case of a Knowledge Level built into the language, this would lead to:

  • Abstractions never depend upon their Implementations

which is similar to the second part of the Dependency inversion principle by Robert C. Martin:

Abstractions should not depend upon details. Details should depend upon abstractions.

Since many analysis patterns in the Analysis Patterns book involve the Knowledge Level pattern, this single dependency rule already applies to many analysis patterns: Party Type Generalizations, Measurement, Observation, Protocol, Associated Observation, Measurement Protocol etc. The pattern Quantity can be seen as a specialized ValueObject (see Domain Driven Design above) hence should also not depend on any Entity nor Service.

Design Patterns

The book Design Patterns: Elements of Reusable Object-Oriented Software by Erich Gamma et al. presents the classic design patterns. These patterns define named participants. In the pattern participant ignorance principle I discussed the concepts of ignorant vs. dedicated participants within a pattern, and their consequences for dependencies:

  • Ignorant pattern participants should never depend on dedicated participants
  • Pattern participants never depend on the “Client” participant
  • For each ConcreteX participant, the corresponding abstract X never depends on it (Abstractions never depend upon their Implementations)

In practice, this means:

  • In the Adapter pattern, the Adaptee should not depend upon the Adapter, and the Target should depend upon nothing
  • In the Facade pattern, the sub systems should not depend upon the Facade
  • In the Iterator pattern, the Aggregate should not depend upon the Iterator; however, every Java collection is a counter-example, as each contains its own ConcreteIterator.
  • In creational patterns (Abstract Factory, Prototype, Builder etc.), the Product and ConcreteProduct should not depend on the dedicated participant that does the allocation (the Factory etc.)
  • And so on for other patterns, some of which are already discussed in the pattern participant ignorance principle.

In short, if we look at the design patterns as a set of types with inheritance/implementation, invocation/delegation and creation relationships between them, the dependencies should not flow in the reverse direction of the relationships; in other words, using UML arrows, the dependencies should only be allowed in the direction of the arrows.

Addiction to sugar is a kind of dependency

Patterns of Enterprise Architecture

In the book Patterns of Enterprise Application Architecture by Martin Fowler, the Separated Interface Pattern proposes a way to manage dependencies by defining the interface between packages in a separate package from its implementation. This is similar to the Dependency inversion principle, as discussed here, which states:

A. High-level modules should not depend upon low-level modules. Both should depend upon abstractions.

By the way this is also very similar to the recommendation in Domain Driven Design:

Abstract Core module never depends on its specialized or implementation modules.

Finally, in the spirit of UML stereotypes that we sometimes put on packages to express their intent:

  • Utils never depends on anything but other Utils

What for?

If we manage to make every use of the above patterns explicit in the source code, for instance using Java annotations or simply Javadoc tags, then it becomes possible for a tool to deduce dependency constraints and automatically enforce them.

Imagine, just add @pattern ValueObject in your Javadoc comment, and voilà! A tool is now able to deduce that if you happen to import anything but basic java.*, you must be warned.
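A minimal sketch of what such a declaration could look like; the @Pattern annotation below is hypothetical (no such standard annotation exists), and a real tool would walk the imports rather than just read the tag:

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

// Hypothetical annotation to declare pattern participation on a type
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.TYPE)
@interface Pattern {
    String value();
}

@Pattern("ValueObject")
final class Quantity {
    private final int amount;
    private final String unit;
    Quantity(int amount, String unit) { this.amount = amount; this.unit = unit; }
}

// a trivial checker could then read the declaration by reflection
class PatternChecker {
    static String patternOf(Class<?> type) {
        Pattern p = type.getAnnotation(Pattern.class);
        return p == null ? null : p.value();
    }
}
```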

Of course the fine tuning of the default behavior can take some time (do we accept that ValueObjects may depend upon low level utils like StringUtils? Probably yes), but the result will at least be stable regardless of the refactorings.

Given the existing variety of patterns out there, I am confident that just about any class or interface within a project can be declared a participant in at least one pattern, and therefore have its dependency constraints deduced at the same time.


Knobs pagination in Arduino

Here is an example of how to use the same knobs (e.g. 6 knobs, easy to connect to the 6 Arduino analog inputs) several times to adjust several parameters spread over several “pages”.

This enables “multiplexing” the same knobs many times, in a safe fashion thanks to the protection mechanism: after changing the active page, every knob is in protected mode (turning the knob does not change the value of the parameter), so as not to force a sudden jump of value. On turning a knob, the LED lights up when the knob’s value matches the stored value, and the knob then becomes fully active (at least until the next page switch).

This mechanism is inspired by and similar to that of the microKorg synth edit knobs. As Wikipedia says about it:

“In the Edit mode, however, every knob makes the LED panel display the current value associated with the knob position. To change the current value of a given parameter, the user must pass through the original value before being able to modify anything. When one reaches that original value, the Original Value LED will light on, and the value displayed on the LED panel will stop flashing. This prevents the user from passing from a small to a high value immediately, so there’s no big margin in the change of the parameters (a useful function for live performances).”
The microKorg Edit panel, with 5 knobs and 22 pages = 120 parameters to control

In Arduino

Console output printing the 4 pages of 6 parameters and the currently active page

Doing this in Arduino is not very difficult. We first need a two-dimensional array to store the value of each parameter:

// the permanent storage of every value for every page, used by the actual application code
int pageValues[PAGE_NB][KNOB_NB];

We also need an array to store the state of each knob: whether it is PROTECTED or ACTIVE, and yet another array to keep track of the value of each knob in the previous loop, in order to detect when a knob is being turned:

// last read knob values
int knobsValues[KNOB_NB];
// knobs state (protected, enable...)
int knobsStates[KNOB_NB];

Then we begin to read the digital switches to select the current active page. In case the selected page has changed, every knob has its state set to PROTECTED. We then read the analog value for each knob, detect changes, find out when the knob value is in sync with the stored value for the parameter to light the LED and set its state to ACTIVE.

Only when the state is set to ACTIVE we copy the current value of the knob to the actual parameter stored for the current page.

In my experiment I have 4 digital buttons connected to digital inputs 8 to 11, and 6 knobs (pots) connected to the 6 analog inputs:

The Arduino board, the 4 page buttons, the 5 + 1 knobs and fader and the LED

Here is the full code below:

/**
 * Handles a pagination mechanism: each page can use the same knobs;
 * digital switches select the current active page.
 * This enables "multiplexing" the same knobs many times, safely thanks to the protection mechanism.
 * After changing the active page, every knob is protected, not to force a jump in value.
 * On turning a knob the LED lights up when the knob's value matches the stored value, and then
 * the knob becomes active till next page switch.
 * This mechanism is inspired by and similar to that of the microKorg synth edit knobs.
 * Copyleft cyrille martraire
 */

//---------- USER INPUT AND PAGINATION -----------
#define PAGE_NB 4
#define KNOB_NB 6

#define PROTECTED -1
#define ACTIVE 1

#define SYNC_LED 12

// first digital input used by the page selection buttons (buttons on pins 8 to 11)
#define FIRST_PAGE_BUTTON 8

// the permanent storage of every value for every page, used by the actual music code
int pageValues[PAGE_NB][KNOB_NB];

// last read knob values
int knobsValues[KNOB_NB];
// knobs state (protected, enable...)
int knobsStates[KNOB_NB];
// current (temp) value just read
int value = 0;
// the current page id of values being edited
int currentPage = 0;
// signals the page change
boolean pageChange = false;
//temp variable to detect when the knob's value matches the stored value
boolean inSync = false;

void setup() {
  pinMode(13, OUTPUT);
  Serial.begin(9600);
  setupPagination();
}



void setupPagination(){
  pinMode(SYNC_LED, OUTPUT);
  for(int i=0; i < KNOB_NB; i++){
    knobsValues[i] = analogRead(i);
    knobsStates[i] = ACTIVE;
  }
}

// read knobs and digital switches and handle pagination
void poolInputWithPagination(){
  // read page selection buttons
     value = digitalRead(i);
     if(value == LOW){
         pageChange = true;
         currentPage = i - FIRST_PAGE_BUTTON;
  // if page has changed then protect knobs (unfrequent)
    pageChange = false;
    digitalWrite(SYNC_LED, LOW);
    for(int i=0; i < KNOB_NB; i++){
      knobsStates[i] = PROTECTED;
  // read knobs values, show sync with the LED, enable knob when it matches the stored value
  for(int i = 0;i < KNOB_NB; i++){
     value = analogRead(i);
     inSync = abs(value - pageValues[currentPage][i]) < 20;

     // enable knob when it matches the stored value
        knobsStates[i] = ACTIVE;

     // if knob is moving, show if it's active or not
     if(abs(value - knobsValues[i]) > 5){
          // if knob is active, blink LED
          if(knobsStates[i] == ACTIVE){
            digitalWrite(SYNC_LED, HIGH);
          } else {
            digitalWrite(SYNC_LED, LOW);
     knobsValues[i] = value;

     // if enabled then miror the real time knob value
     if(knobsStates[i] == ACTIVE){
        pageValues[currentPage][i] = value;

void loop() {
  poolInputWithPagination();
  printAll();  // debug output to the console
  delay(10);
}

void printAll(){
     Serial.print("page ");
     Serial.println(currentPage);

     //printArray(knobsValues, 6);
     //printArray(knobsStates, 6);

     for(int i = 0; i < 4; i++){
       printArray(pageValues[i], 6);
     }
}

void printArray(int *array, int len){
  for(int i = 0; i < len; i++){
       Serial.print(array[i]);
       Serial.print(" ");
  }
  Serial.println();
}

Geometric Rhythm Machine

In the post “Playing with laser beams to create very simple rhythms” I explained a theoretical approach that I want to materialize into an instrument. The idea is to create complex rhythms by combining several times the same rhythmic patterns, but each time with some variation compared to the original pattern.

Several possible variations (or transforms, since a variation is generated by applying a transform to the original pattern) were proposed, starting from a hypothetical rhythmic pattern “x.x.xx..”. Three linear transforms: Reverse (“..xx.x.x”), Roll (“.x.x.xx.”) and Scale 2:1 (“x...x...x.x.....”) or 1:2 (“xxx.”), and some non-linear transforms: Truncate (“xx..”) etc.

Geometry + Light = Tangible transforms

The very idea behind the various experiments made using laser light beams and LDR sensors is to build an instrument that proposes all the previous transforms in a tangible fashion: when you move a physical object, you also change the rhythm accordingly.

Let’s consider a very narrow light beam turning just like the hands of a clock. Let’s suppose that our clock has no numbers written around it, but we can position marks (mirrors) anywhere on the clock surface. Still in the context of creating rhythms, now assume that every time a hand crosses a mark (mirror) we trigger a sound. So far we have a rhythmic clock, which is a funny instrument already. But we can do better…

Going back to our rhythmic pattern “x.x.xx..”, we can represent it with 4 mirrors that we position on the same circle. On the illustration below this original pattern is displayed in orange, each mirror shown by an X letter. If we now link these 4 mirrors together with some adhesive tape, we have built a physical object that represents a rhythmic pattern. The rotating red line represents the laser beam turning like the clock hands.

Illustration of how the geometry of the rhythmic clock physically represents the transforms

Now that we have a physical pattern (the original one), we can of course create copies of it (we need more mirrors and more adhesive tape). We can then position the copies elsewhere on the clock. The point is that where you put a copy defines the transform that applies to it, compared with the original pattern:

  • If we just shift a pattern left or right while remaining on the same circle, then we are actually doing the Roll transform (change the phase of the pattern) (example in blue on the picture)
  • If we reverse the adhesive tape with its mirrors glued on, then of course we also apply the Reverse transform to the pattern (example in grey on the picture)
  • If we move the pattern to another (concentric) circle, then we are actually applying the Scale transform, where the scaling coefficient is the fraction of the radius of the circle over the radius of the circle of the original pattern (example in green on the picture)

Therefore, simple polar geometry is enough to provide a complete set of physical operations that fully mimic the desired operations on rhythmic patterns. And since this geometry is in deep relationship with how the rhythm is being made, the musician can understand what’s happening and how to place the mirrors to get any desired result. The system is controllable.

To apply the Truncate transform (that is not a linear transform) we can just stick a piece of black paper to hide the mirror(s) we want to mute.

If we layer the clock we just described, with one layer for each timbre to play, then again changing the timbre (yet another non-linear transform) can then be done by physically moving the pattern (mirrors on adhesive tape) across the layers.
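As a sanity check of this geometry, here is a small sketch in code (a hypothetical MirrorPattern class; 8 steps per turn is just an arbitrary resolution) where roll, reverse and scale are exactly the polar operations described above:

```java
import java.util.TreeSet;

// A rhythmic pattern as mirror positions (steps) on a circle crossed stepsPerTurn times per cycle
class MirrorPattern {
    final TreeSet<Integer> steps = new TreeSet<Integer>();
    final int stepsPerTurn;

    MirrorPattern(int stepsPerTurn, int... positions) {
        this.stepsPerTurn = stepsPerTurn;
        for (int p : positions) steps.add(p);
    }

    // Roll: slide the taped mirrors along the same circle
    MirrorPattern roll(int shift) {
        MirrorPattern out = new MirrorPattern(stepsPerTurn);
        for (int p : steps) out.steps.add((p + shift) % stepsPerTurn);
        return out;
    }

    // Reverse: flip the adhesive tape over
    MirrorPattern reverse() {
        MirrorPattern out = new MirrorPattern(stepsPerTurn);
        for (int p : steps) out.steps.add(stepsPerTurn - 1 - p);
        return out;
    }

    // Scale 2:1: move the tape to a circle whose cycle lasts twice as long
    MirrorPattern scale2() {
        MirrorPattern out = new MirrorPattern(stepsPerTurn * 2);
        for (int p : steps) out.steps.add(p * 2);
        return out;
    }

    // render as the "x." notation used in the text
    String render() {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < stepsPerTurn; i++) sb.append(steps.contains(i) ? 'x' : '.');
        return sb.toString();
    }
}
```

Starting from “x.x.xx..” (mirrors at steps 0, 2, 4 and 5), reverse() yields “..xx.x.x”, roll(1) yields “.x.x.xx.” and scale2() yields “x...x...x.x.....”.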

From theory to practice

Second prototype, with big accuracy issues

Though appealing in principle, this geometric approach is hard to implement in a physical installation, mostly because of accuracy issues:

  1. The rotating mirror must rotate perfectly, with no axis deviation; any angular deviation is multiplied by two and leads to an important position deviation of the light spot in the distance: delta = distance × tan(2 × deviation angle). For instance, a 1° deviation at 2 m moves the spot by 2 m × tan(2°) ≈ 7 cm.
  2. Each laser module is already not very accurate: the light beam is not always perfectly aligned with the body. To fix that would require careful tilt and pan adjustment on the laser module support
  3. Positioning the retroreflectors in a way that is accurate yet easy to add, move or remove is not that easy; furthermore, even if in theory the retroreflectors reflect all the incoming light back to where it comes from, in practice maximum reflectance happens when the light hits the reflector orthogonally, which matters to avoid missed hits

Don’t hesitate to check these pages for progress, and any feedback much appreciated.


Musical instruments for musicians and non-musicians: Put into practice

Other posts in the series Musical instruments for musicians and non-musicians:

  1. Part One: Controls: Analysing how continuous or discrete controls on the sound affect playability to great extent
  2. Part Two: Constraints: How embedding musical theory as constraints makes the instrument easier and more rewarding
  3. Part Three: Exotic examples: Examples of exotic instruments and how they achieve good or not so good results
  4. Part Four: Put into practice: Let’s put theory into practice to build an easy and musically-sounding Theremin

Now we want to put that into practice and build a musical instrument. Let’s say we want to do something inspired by the Theremin, but simpler to play, more fun to look at, and easier to build as well.


Here is now a case study: We want to build an instrument that must be:

  1. Playable by friends that know very little in music: in other words really easy to play
  2. Attractive for friends that enjoy the fun of playing and jamming together: must be expressive
  3. Suitable for groovy electronic music (house, hip-hop, electro, acid-jazz, trip-hop and a bit of techno)
  4. Able to participate within a small band among other instruments, mostly electronic machines
  5. With a funky look and visually attractive when performing

Design and justification

The finished instrument

Based on these requirements, and after some trial and error, we go for the following:

  1. Finger physical guidance, because it is too hard to keep hands in the air at the same position (req. 1)
  2. Bright LEDs plus a light sensor as a primary way of playing, for the funky look and visually attractive performance (req. 5)
  3. Discrete pitch as the primary way of playing, with pitch quantization against a pentatonic scale: easy and good-sounding (but coupled to one fixed tonality unless adding configuration buttons) (req. 1)
  4. Expression on the pitch: once a note is pressed, sliding sends pitch bend events to modulate the pitch as on a guitar or a real theremin; this only happens after a note is pressed, so as not to conflict with the primary pentatonic scale (req. 2)
  5. Additional expression using a distinct light sensor mapped to a MIDI Continuous Controller (controller 1: modulation wheel) (req. 2)
  6. Continuous rhythm control to start the project simply; the plan is to quantize it on 16th notes according to a MIDI clock later (a tradeoff to keep things simple right now, which should be even simpler due to req. 1)
  7. MIDI controller rather than integrated synthesizer to allow for very good sounds generated from external professional synthesizers (req. 3)
  8. Internal scale setup within the range between C3 and C5, to dedicate the instrument to play solo on top of an electronic rhythm (req. 4)
  9. Arduino implementation (easy to implement pitch quantization logic and expression controls)


The very simple circuit
The Arduino board
  • An aluminium rail (from a DIY shop) is fixed to an empty salt box as the body (hence the name “Salty Solo”)
  • A LED (from Booxt) slides on the rail
  • The main sensor and the expression sensor are two simple LDRs connected through a voltage divider to two Arduino analog inputs
  • Two buttons are simply connected to two Arduino digital inputs.
  • The MIDI out DIN plug is connected to the Arduino Tx pin.
  • The rest is in the Arduino software!

I have decorated the salt box with adhesive stuff…

Playability and fixes

At first try, playing the Salty Solo is not that easy! A few problems happen:

  1. Reaction time (or “latency”) is not good
  2. Moving the light with the left hand is not very fast, hence impedes playing a melody that sounds like one.
  3. There is also a kind of conflict between the note quantization, which rounds to the closest accurate note, and the expression control that allows “bending” notes.

The first problem was solved by raising the Arduino loop frequency to 100 Hz (wait period = 10 ms); to avoid sending MIDI continuous controller and pitch bend messages too often, we then send them (when needed) only once out of a few loops.
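The “once out of a few loops” throttling boils down to a simple decimation counter; here is a sketch of the idea in plain Java (illustrative only, not the actual firmware code):

```java
// Decimation counter: lets an action through only on every n-th call
class EveryNthLoop {
    private final int n;
    private int count = 0;

    EveryNthLoop(int n) { this.n = n; }

    // returns true on every n-th call, false otherwise
    boolean shouldSend() {
        count = (count + 1) % n;
        return count == 0;
    }
}
```

At 100 Hz, gating the MIDI messages with EveryNthLoop(5) caps them at 20 per second.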

For the second problem, a workaround was found: the second button triggers the next note in the scale, and pressing both buttons together triggers the second next note in the scale. Basically, in the (almost) pentatonic scale we use, this usually means jumping to the third or to the fifth. This kind of jump is very common when playing guitar or bass guitar thanks to their multiple strings, and it does help play faster while moving the left hand much less. With two buttons like this it is much easier to play melodic fragments.

The last problem was solved a bit randomly: because of a bug in the code, the pitch bend only happens on the highest note; this preserves the most useful case of playing the guitar hero on a very high note while bending it a lot. On the other notes, sliding the left hand descends or ascends the scale instead. Further playing and external opinions will probably help tweak this behaviour over time.

Here is the Arduino source code for Salty Solo

Here is a video to illustrate how it works…

Experimental instrument for musicians and non-musicians from cyrille martraire on Vimeo.


Other CCs that could be used (note that the modulation wheel can also control the same parameters, on a program-by-program basis on a synth):

  • 1 Modulation Wheel or Joystick (positive polarity) (MSB) – can be effectively remapped to other controllers on some synths
  • 7 Volume (MSB) – if you re-route to Controller 7, your software mixer will mess up
  • 71 Resonance (aka Timbre)
  • 74 Frequency Cutoff (aka Brightness)

And the panic button: 123 All Notes Off


Musical instruments for musicians and non-musicians: Constraints

Other posts in the series Musical instruments for musicians and non-musicians:

  1. Part One: Controls: Analysing how continuous or discrete controls on the sound affect playability to great extent
  2. Part Two: Constraints: How embedding musical theory as constraints makes the instrument easier and more rewarding
  3. Part Three: Exotic examples: Examples of exotic instruments and how they achieve good or not so good results
  4. Part Four: Put into practice: Let’s put theory into practice to build an easy and musically-sounding Theremin

How can we design musical instruments that both musicians and non-musicians can play and enjoy?

In the previous part of this series, we stated that a musical instrument must provide “a way for musicians and non-musicians to make music easily: musical assistance”.

We will focus on that point in this post.

In the previous post, when discussing the various controls an instrument provides to make music, we already noted that discrete controls were easier to use, since they automatically enforce playing in tune or in rhythm; this was already a simple case of musical assistance.

The ideal instrument

Out of every possible arrangement of sounds, very few can be considered music. Therefore, a tool that could make every possible musical sound and no non-musical sound would be the ideal instrument.

Such an instrument would have to be very smart and understand what music is. It would have to be playable as well. This instrument probably cannot exist, but designing an instrument is about trying to go there.

Empower the tools so that they can empower the user: towards instruments that always play in tune

The more we can understand what music is, and more specifically what it is not, the more we can embed that understanding into the instrument so that it can assist the player by preventing non-musical events. Preventing non-musical sounds helps the non-musician, while losing no freedom (or very little) for the expert musician.

To illustrate that approach, let us consider a simplified musical system made of an infinity of notes. If we know that we want to play Western music, we know we can reduce our note set down to 12; if we know we want to play in the Blues genre, then we know we will only use 6 notes out of this 12-note set; if we know that our song is in A minor, then we know exactly which 6 notes we need. Going further, if we know we want to play a walking bass line, we might end up with only 3 playable notes: the task of playing has become much simpler!

This idea already exists de facto in the form of the “black keys only” trick: the subset of black keys on the piano keyboard forms a pentatonic scale (a scale with only 5 notes) that sounds beautiful in whatever combination you play them:

Improvising on the Black Piano Keys
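The trick is easy to state in code: the black keys are pitch classes 1, 3, 6, 8 and 10 (C#, D#, F#, G#, A#), and an instrument can be restricted to propose only those. A small illustrative sketch (Java; the starting octave is an arbitrary choice):

```java
// Sketch of an instrument restricted to the black-key pentatonic scale.
// Pitch classes 1, 3, 6, 8, 10 are C#, D#, F#, G#, A#.
public class BlackKeys {
    static final int[] PENTATONIC = { 1, 3, 6, 8, 10 };

    /** Maps a scale degree (0, 1, 2, ...) to a MIDI note on the black keys,
     *  starting from the octave of middle C (MIDI note 60, an arbitrary choice). */
    public static int note(int degree) {
        int octave = degree / 5;
        return 60 + 12 * octave + PENTATONIC[degree % 5];
    }
}
```

Whatever sequence of degrees the player hits, every resulting note belongs to the same pentatonic scale.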

With this approach in mind, let’s now have a look at what music is, and more specifically what are its constraints on how to arrange sounds together.

Musical constraints

Music aims at providing aesthetic pleasure by means of organised sounds. This typically imposes constraints on what can be played.


Music (more precisely, musical pleasure) happens when enough surprise meets enough expectation at the peak of the Wundt curve (shown at left): expectation is typically caused by conventional constraints (e.g. Western music conventions) and by the internal consistency of the piece of music, which enable the listener to expect what will happen next; on the other hand, surprise is caused by little deviations from those expectations, creating interesting tension.

In short, too much expectation means boring music, whereas too much surprise means just noise.

Musical constraints

In almost every popular genre of music, conventional constraints require adherence to a melodic scale (e.g. major or minor), to a rhythmic meter (e.g. 4/4 or 6/8), and almost always to a tonality. These constraints represent a “background grid” on top of which a given piece of music will draw a specific motif.

Layers of constraints
Layers of constraints

On top of these conventional constraints, a piece of music also exhibits an internal consistency. This often happens through some form of repetition between its component parts (think about the fugue, the verse-chorus-verse-chorus repeated structure, or the obvious theme repetition in classical music). These constraints are usually written into the musical score, or into the simpler theme and chord chart in Jazz notation.

Performance freedom

Musical performance, for instance on a concert stage, is therefore all about the remaining freedom you have within these constraints. When playing a fully written score, this freedom is reduced to the articulations between the notes, playing louder or softer, or playing a little faster or slower. In jazz the freedom is bigger, as long as you more or less follow the scale, tonality, chords and rhythm of the song: this is called improvisation. Improvisation on the saxophone or most classical instruments requires being trained to use only the subset of notes suited to each song (“practice your scales”).

Of course, if a player uses his available freedom excessively, he risks losing the audience, and listeners might consider it noise, not music. On the other hand, if he plays too perfectly or “mechanically”, they might get bored.

Layers of constraints

We can consider the above constraints as layers of constraints on top of each other (see picture): at the top, the category “pleasant music” defines aesthetic constraints (if one wants to play unpleasant music, the freedom is bigger and there are fewer constraints). Below this layer, Western music brings its constraints (e.g. meter, tempered scale, tonality), then each genre adds its own (tempo range, timbres, rhythmic patterns etc.), and at the bottom layer, each piece of music finally defines the most restrictive constraints in the form of the written score, chord chart etc.

For a very deep and rigorous exposé on this topic, I recommend the excellent book How Music REALLY Works which is full of precious insights.


The more we can embody these constraints into the instrument, the easier it will be to play. As an example, if our constraints impose that we can only play 6 different notes, an instrument that enables playing 12 different notes is not helpful: we run the risk of playing the 6 notes that are “out of tune”! The ideal instrument should only propose the right 6 notes.

Harmonic table keyboard

The new C-Thru Harmonic Table keyboard (USB)

The new C-Thru Harmonic Table keyboard (USB)

If we want to play chords, or arpeggios, the usual piano keyboard layout is not the most straightforward one, because you have to learn the fingering for each chord.

For that particular purpose, the keyboard can be improved, and people have done just that: it is called the Harmonic Table layout.

Here is an article that explains more about it, and here is the website (also with explanations) of the company C-Thru that is launching such a commercial keyboard at the moment.

The beauty of this keyboard is that every chord of the same type has the very same fingering, as opposed to the piano keyboard:

Same fingering for every chord of the same type (picture from C-Thru)
Same fingering for every chord of the same type (picture from C-Thru)


In some musical genres, such as salsa, waltz or bossa-nova, there is a very strong rhythmic pattern that constrains the song very much, especially for the accompanying instruments.

Manufacturers have long embedded these constraints in the form of an auto-accompaniment feature built into popular electronic keyboards. The rhythmic patterns for many genres are stored in memory, and when you play a chord on the left part of the keyboard, the chord is not played according to your finger hitting the keyboard but according to the chosen rhythmic pattern. This enables many beginners or very amateur musicians to play complex accompaniments and melodies easily. The same system can also play predefined bass lines, also stored in memory.

Going further, some electronic keyboards have also added shortcuts for the most common chord types: you only have to press one note to trigger the major triad chord, and if you want the minor triad you just press the next black key as well, etc. This is called “auto chord” on my almost-toy Yamaha SHS10.
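The auto-chord idea boils down to expanding one pressed key into a full triad: a major triad is the root plus 4 and 7 semitones, a minor triad the root plus 3 and 7. A minimal sketch (Java; this is an illustration of the idea, not the actual Yamaha implementation):

```java
// Sketch of an "auto chord" feature: one pressed key expands into a triad.
// Major triad = root + 4 + 7 semitones; minor triad = root + 3 + 7.
public class AutoChord {
    public static int[] majorTriad(int rootMidiNote) {
        return new int[] { rootMidiNote, rootMidiNote + 4, rootMidiNote + 7 };
    }
    public static int[] minorTriad(int rootMidiNote) {
        return new int[] { rootMidiNote, rootMidiNote + 3, rootMidiNote + 7 };
    }
}
```

Pressing middle C (MIDI note 60) would thus sound C-E-G for a C major chord with a single finger.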

A Yamaha electronic keyboard with auto-accompaniment and auto-chord features
A Yamaha electronic keyboard with auto-accompaniment and auto-chord features

However, this kind of accompaniment does indeed restrict the music space severely, and therefore the result very quickly becomes very boring. Auto-accompaniment is now considered “infamous”, extremely “kitsch”, and very amateur-sounding. But just because the idea has been pushed too far does not mean it is a bad idea in itself, or forever…

Samplers and Digital Audio Workstations

Though classical instruments play single sounds (usually single notes or single chords), samplers, loopers and groove boxes can trigger full musical phrases at the touch of a button. This in turn can be played in rhythm, or quantized, as described in the previous post. Here the idea is to have musical constraints already built into musical building blocks, waiting to be used together.

Going further, DJs play complete records in the rhythm of the previous record (beatmatching), and increasingly take care of the records’ harmony (harmonic mixing): they actually build a long-term musical piece, with a dramatic progression from opening up to a climax, rest etc. In this respect such a DJ mixing session or “playlist” can be compared with a symphony, except that the DJ uses full ready-made records instead of writing raw musical lines for each instrument of the orchestra.

Though not really “live” instruments, recent software applications for wannabe music producers, such as Garage Band or Magix Music Maker, and to some extent many more professional Digital Audio Workstations (DAW), have taken the approach of providing huge libraries of ready-made music chunks. From full drum loops, to 2-bar-long riffs of guitar, brass, piano or synthesizer, to complete synth melodies and even pieces of vocals, you can create music without playing any music at all in the first place.

A very important aspect is that these chunks are also very well prepared to fit together (same key or automatically adjustable key, same tempo or automatically adjustable tempo, pre-mastered), therefore whatever combination of chunks you choose, it cannot sound bad!

Again, we can consider that this latter approach embeds the knowledge from music theory that a song has one tempo, which must be shared by every musical phrase, and one tonality, which must also be shared by every musical phrase.

When creating a new project you must set the tempo and tonality for the whole song:

Startup settings for new project in Garage Band
Startup settings for new project in Garage Band

Then you can navigate the available loops and musical fragments; whenever you choose one it will be adjusted to fit the song’s tempo and key.

Garage Band Loops browser
Garage Band Loops browser

Just like auto-accompaniment, this idea results in good-sounding but often uninspired music (however, “Love in This Club” by Usher, a US number-one hit, was produced using three ready-made loops from Logic Pro 8, which can be considered the very professional version of Garage Band, as shown here). Again, this approach enables a lot of people to have fun making music easily, with a good reward.


Instruments that are more musically aware can use their knowledge of music theory to assist human players further: they can prevent hitting a note that would be out of tune, correct the timing, enforce the key or the tempo, etc.

Instruments that trigger bigger chunks of music, such as loops and ready-made musical phrases (e.g. samplers, groove boxes etc.), can be considered the most attractive for non-musicians. Playing a musical chunk or a record does yield instant gratification, more than most other instruments; however, making something really musical out of that still requires training and practice.

The key issue is therefore to find a balance between the amount of knowledge the instrument must embed and the remaining freedom to express emotions and to try unconventional ideas, in other words to be really creative. The problem with musical constraints is that even if they apply almost always, there is always a case where we must violate them!

A note on visual feedback

Just like cyclists do not look at the bicycle when they ride, musicians hardly look at their instrument when they play. Instead they look at the conductor, at their score, or at their band mates to share visual hints of what is going to happen next. When being taught a musical instrument, not looking at one’s fingers is one of the first pieces of advice given.

Instruments have to provide feedback, and almost always this feedback is what you hear.

However, in a live concert or performance, people in the audience really expect to see something happening, hence there is a clear need for a visual show.

Visual excitement for the audience happens when:

  • We can see the instruments: the view of an instrument can already be exciting
  • There is a physical involvement from the player(s), they must do something and be busy enough doing visible things, gestures, attitude, dancing etc. On the other hand, a guy in front of a computer, clicking the mouse from time to time, is not very attractive.
  • There is enough correlation between the gestures and attitude of the player and the music. If you can see the singer’s lips producing the sound you hear, the pianist’s shoulders moving as he reaches for higher notes you also hear, or the drummer hitting the drums you hear, then the visual stimuli and the music combine to create additional excitement.
  • The player is having fun: having fun on stage is a great way to excite an audience!

Read More

Musical instruments for musicians and non-musicians: Controls

Other posts in the series Musical instruments for musicians and non-musicians:

  1. Part One: Controls: Analysing how continuous or discrete controls on the sound affect playability to great extent
  2. Part Two: Constraints: How embedding musical theory as constraints makes the instrument easier and more rewarding
  3. Part Three: Exotic examples: Examples of exotic instruments and how they achieve good or not so good results
  4. Part Four: Put into practice: Let’s put theory into practice to build an easy and musically-sounding Theremin

How can we design musical instruments that both musicians and non-musicians can play and enjoy?

“A musical instrument is an object constructed or used for the purpose of making music. In principle, anything that produces sound can serve as a musical instrument.” (Wikipedia)

What is a musical instrument?

In practice, hardly anybody calls a hammer a musical instrument, and hardly anybody considers an iPod a musical instrument either. These two objects are missing something essential: the hammer lacks a way to assist the player in making musical sounds, whereas the iPod lacks critical control to shape and alter the music in real time.

Intuitively, musical instruments are tools to make music that are convenient for that purpose, and that provide enough live control to the human player.

Simple claves
Claves, perhaps the simplest instrument

Therefore, to design a musical instrument, we need to address:

  1. A way to generate sounds (every way to originate sounds is classified here); we won’t discuss that point in this post
  2. A way for musicians and non-musicians to make music easily, musical assistance (we will discuss this point partially in this post)
  3. A way for musicians and non-musicians to control the music being made: plenty of control (we will discuss this point in detail in this post)

For non-musicians, it is obvious we have to address point 2 (musical assistance) more, whereas for musicians point 3 (plenty of control) will probably be the most important one, otherwise they will be frustrated.

Of course the latter two points are conflicting, so the design of a musical instrument will require finding a balance between them.

Musical controls: Playability Vs. Expressiveness

“Playability is a term in video gaming jargon that is used to describe the ease by which the game can be played” (Wikipedia).

Here we will say an instrument has a good playability if it is easy to play, by musicians and especially by non-musicians.

Music is made of sounds that share essential attributes (the list is far from complete):

  • Pitch: this represents how “high” or “low” a sound is. Women typically sing at a higher pitch than men
  • Rhythm: this represents how the sounds are distributed over the metrical grid, in other words the pattern of sounds and silences
  • Articulation: this represents the transition or continuity between multiple notes or sounds, e.g. linked notes (legato) or well-separated notes
  • Tempo: this represents the overall “speed” of the music
  • Dynamics: this represents how “loud” or “soft” a sound is
  • Timbre: this is the “colour” of the sound, as in “a piano has a different timbre than the saxophone”; we can sometimes talk about “brighter” or “duller” timbres, etc.
  • Special effects and Modulations: electronic effects are endless, and so are modulations; most well-known modulations are the vibrato of the singer, or the special way to “slide” the start of notes on the saxophone.

Pitch and rhythm are by far the primary controls, and the playability of an instrument is strongly linked to them: an instrument whose pitch or rhythm is difficult to control will surely be particularly difficult to play.

On top of pitch and rhythm, every musician, experienced or not, demands enough other controls to be as expressive as possible.

Continuous pitch Vs. discrete pitch

Continuous pitch instruments can produce sounds of any pitch (lower or higher note) continuously, such as the violin, cello, theremin or the human voice (and also the fretless bass guitar).

Discrete pitch instruments are represented by the piano, flute, trumpet and the usual guitar: every instrument with a keyboard, or with frets, valves, or explicit finger positions.

continuous pitch instrument, particularly hard to play
continuous pitch instrument by Feromil, particularly hard to play

Because discrete pitch instruments already enforce automatically that every note played belongs to at least a scale (usually the chromatic scale), they are considered easier to learn and to play than continuous pitch instruments, where the player must find the notes using his or her ears alone (although after deep training this becomes instant habit).

Discrete pitch instruments have a better playability than continuous pitch instruments.

The chromatic scale contains 12 notes, out of which the (main) tonality of a song usually uses only 5 to 7 (diatonic, pentatonic or blues scales). It is usually up to the player to practice intensively to be able to play in any such sub-scale without thinking about it, but one could imagine a simpler instrument that could be configured to propose only the notes that fit within a song (to be rigorous we should then deal with the problem of modulation, but that would take us too far).
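Such a scale-constrained instrument can be sketched in a few lines: any requested chromatic note is snapped down to the nearest note of the configured scale. The choice of A minor pentatonic below is purely illustrative:

```java
// Sketch of an instrument configured to propose only the notes of a song's
// scale. A minor pentatonic (A C D E G) is an illustrative choice.
public class ScaleFilter {
    static final boolean[] ALLOWED = new boolean[12];
    static {
        // Pitch classes of A minor pentatonic: A=9, C=0, D=2, E=4, G=7
        for (int pc : new int[] { 9, 0, 2, 4, 7 }) ALLOWED[pc] = true;
    }

    /** Lowers the requested MIDI note until it lands on an allowed pitch class. */
    public static int snap(int midiNote) {
        while (!ALLOWED[midiNote % 12]) midiNote--;
        return midiNote;
    }
}
```

With such a filter in front of the sound generator, the player can hit any key and still never play out of the song's scale.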

In contemporary music, instruments are sometimes expected to support microtonal scales, i.e. much more than 12 notes.

The great advantage with continuous pitch instruments is that they offer extreme control on the pitch and its variations: one can create a vibrato on a violin by quickly wobbling the finger on the fingerboard, or create a portamento by moving slowly between notes. The counterpart for that control is the huge training effort required.

Some instruments with a fixed pitch (simple drums, triangle, claves, wood sticks etc.) are obviously the easiest to use with respect to the pitch.

Playing in rhythm Vs. quantized playback

Most classical instruments require you to trigger the sound at the right time with no help: you have to feel the rhythm, and then you must be accurate enough with your hands to trigger the sound (hit the drum, bow the string etc.) exactly when you want it. Again, this often requires training. By analogy with pitch, we can consider this a continuous control of the rhythm.

Playing the pads on the MPC to trigger sounds
Playing the pads on the MPC to trigger sounds or to record a rhythmic pattern

Electronic groove boxes (Akai MPC in particular) and electronic sequencers do take care of triggering sound accurately on behalf of the musician, thanks to their quantization mechanism: you play once while the machine is recording, then the machine adjusts what you played against a metrical grid, and then repeats the “perfect” rhythm forever. We can consider that as a discrete control of the rhythm.

Pitch-Rhythm / Continuous-Discrete Controls instruments chart
Pitch-Rhythm / Continuous-Discrete Controls instruments playability analysis chart

Note that the grid does not have to be as rigid as one might imagine: it can accommodate very strong swing, and quantization can also be applied partially, correcting the timing a little while keeping some of the subtle human timing inaccuracies that can be desirable.
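Quantization, including the partial form just mentioned, boils down to interpolating between the played time and the nearest grid point. A sketch (Java; names and parameters are illustrative):

```java
// Sketch of (partial) quantization: snap a played timestamp to the nearest
// point of a metrical grid, optionally only part of the way.
public class Quantizer {
    /** gridMs: grid spacing in milliseconds; strength in [0,1]:
     *  1.0 = full quantization, 0.0 = keep the human timing untouched. */
    public static double quantize(double timeMs, double gridMs, double strength) {
        double nearest = Math.round(timeMs / gridMs) * gridMs;
        return timeMs + strength * (nearest - timeMs);
    }
}
```

For example, a note played 10 ms late against a 125 ms grid would be moved fully onto the beat at strength 1.0, and only halfway (5 ms) at strength 0.5.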

Discrete rhythm instruments (quantized rhythm) have a better playability than continuous rhythm instruments.

Talking about rhythm, the theremin is a bit special since a note is hardly triggered at all; instead, one just controls the sound intensity (dynamics) with the loop antenna on the left. The rhythm is more than continuous…

Realtime quantization can only adjust notes later in time. One could imagine an instrument where notes are only triggered when two conditions are met: the player requests a note, and a machine-generated rhythmic grid triggers a pulse. This would be a form of realtime quantization, which would make the instrument more accessible to non-musicians, especially if they are poor in rhythm.
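Such a gated instrument could be sketched as follows: the player's note request is simply held until the next machine-generated grid pulse (an illustrative sketch of the imagined device, not an existing one):

```java
// Sketch of the imagined realtime-quantizing instrument: a note request is
// held until the next grid pulse, so notes can never sound off the grid.
public class GatedTrigger {
    private boolean pendingRequest = false;

    /** The player presses a key at any time. */
    public void requestNote() { pendingRequest = true; }

    /** Called on every grid pulse; returns true if a note fires now. */
    public boolean onGridPulse() {
        boolean fire = pendingRequest;
        pendingRequest = false;
        return fire;
    }
}
```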

A vintage analogue step sequencer is an electronic device with, for each pulse of a fixed rhythmic grid, a button to decide whether to sound or not and a knob to adjust the pitch of the sound. Each column of buttons and knobs is scanned in turn to trigger sounds on each beat. Playing such a sequencer does not require timing accuracy (it is a discrete rhythm instrument) but does require accuracy in setting each pitch (because it is set with a rotating knob, it is a continuous pitch instrument).
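The scanning behaviour of such a step sequencer can be sketched like this (Java; an analogue sequencer would of course do this in hardware, and the 16-step length is an illustrative assumption):

```java
// Sketch of a 16-step sequencer: each step has an on/off gate (the button)
// and a pitch (the knob); the sequencer scans the steps in turn, one per beat.
public class StepSequencer {
    static final int STEPS = 16;
    final boolean[] gate = new boolean[STEPS];
    final int[] pitch = new int[STEPS];
    private int position = 0;

    /** Advances one beat; returns the MIDI note to trigger, or -1 for silence. */
    public int tick() {
        int step = position;
        position = (position + 1) % STEPS;
        return gate[step] ? pitch[step] : -1;
    }
}
```

The player only edits the gates and pitches; the machine keeps perfect time, which is exactly what makes it a discrete rhythm instrument.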

To sum up this analysis of pitch and rhythm controls as being either continuous or discrete (or absent), the simple chart above shows various instruments sorted by their continuous or discrete pitch and rhythm controls. It reads like this: “the violin is a continuous pitch and continuous rhythm instrument; a groove box has no control over pitch and is a discrete rhythm instrument”.

The more we move towards the upper right corner, the more difficult the instrument usually is (and probably the more expressive as well). The more we move towards the bottom left corner, the easier the instrument is (and probably the more frustrating as well). We have therefore defined a criterion to estimate the playability (or, the other way round, the expressiveness) of an instrument.

Guitar Hero has been put in the chart, in the bottom left corner, although it is not an instrument (as judged by a U.S. court), since the actions on the fake guitar do not control any sound at all; they only increase one’s score.

Expressiveness through other controls

Strange instrument by Jean-François Laporte (Can)
Strange instrument by Jean-François Laporte (Can)

As listed above, playing music makes use of many controls in addition to pitch and rhythm.

Many classical instruments that rely on physical principles (string, air pressure…) provide the musician with a lot of natural controls: changes in the way a sax player breathes, hitting a drum in the center or near the rim, sliding changes in pitch on a string etc.

Electronic instruments used to be rather poor with respect to the expressive potential they provided; however, there is now a huge range of devices that help modulate the sound in realtime: pedals, breath controllers, knobs and sliders, keyboard aftertouch (pressure sensitivity after the key has been pressed).

Various modulation wheels for electronic keyboards from various manufacturers
Various modulation wheels or devices for electronic keyboards from various manufacturers

Typical modulation controls for electronic keyboards govern the pitch bend (modulating the pitch continuously, as an exception to the discrete pitches of the keyboard) and the vibrato (again a fast, vibrating modulation of the pitch, also as an exception to the discrete pitches of the keyboard). They are often wheels, and sometimes joysticks or even touchpads.

A turntable can be considered an instrument, albeit a very limited one, if it provides a way to play a record at a faster or slower pace, hence controlling the tempo.


Non-musicians prefer instruments with fewer controls, or with discrete controls that are easier to get right, so that they can have fun without the risk of being out of tune or out of rhythm. We have examined this for the primary musical controls, pitch and rhythm, including a way to estimate their playability.

Homemade breakbeat controller by Michael Carter aka Preshish Moments (USA)
Homemade breakbeat controller by Michael Carter aka Preshish Moments (USA)

However, when non-musicians become really interested in getting closer to music, they do not want a one-button-does-it-all music box such as Beamz. They want, just like professional musicians, to have enough control to express emotions through the instrument.

At the other end, being both a continuous pitch and continuous intensity instrument, the theremin is definitely one of the most difficult instruments to play: “Easy to learn but notoriously difficult to master, theremin performance presents two challenges: reliable control of the instrument’s pitch with no guidance (no keys, valves, frets, or finger-board positions), and minimizing undesired portamento that is inherent in the instrument’s microtonal design” (Wikipedia).

This series of posts follows a recent and very interesting discussion with Uros Petrevski. Many pictures were shot at the Festival Octopus in February 2009.

Read More