Xinhe, Jinjin and Zhuo ‘Jo’ form a collective of 3 young Chinese ceramic designers who together experiment with the art of fine ceramics under the name “Atelier Murmur”. Their studio is based in Hangzhou, in China. This city is famous for its beautiful lake, the “West Lake”, and for its preserved and restored traditional houses and streets nearby. If you intend to visit China, you really should consider going there!
A story between France and China
Xinhe and Jinjin lived a few years in France, from Marseille to Paris, where we met them and became friends. Zhuo also studied in France for some time. They have now all moved back to China to open a bigger studio in the Art Village, surrounded by the beautiful green tea fields.
We had the pleasure of visiting their newly installed studio, and of discovering their brand new big ceramic oven, a dream piece of equipment for many ceramists.
Overview of the studio
The beautiful surrounding tea fields
a square bowl drying in its plaster mold
Taking risks means you encounter failure…
The big oven and its electronic temperature control unit
Xinhe has a passion for creation and ceramic, and it shows!
The liquid clay waiting to become creative
Zhou talking about their work
Working slowly, it’s very fragile
Xinhe in front of the studio
I had known their ceramic work for some time, and we already own some beautiful pieces they designed, yet it was exciting to discover how they make them in their studio. From the liquid clay to the shaped object, like a glass or a bowl drying in a plaster mold, I realized how little I knew about ceramic-making. It has very little to do with pottery!
Creation is all about taking risks
Moreover, they are designers, not just makers, and as such they experiment with creative ways to manufacture pieces, using tree leaves or textile ribbons to imprint custom textures onto the ceramic.
It’s a trial-and-error process. They take a lot of risks trying new things, and as such they are familiar with failures like broken pieces. As they explain: “you don’t learn if you don’t go out of your comfort zone”. As software developers, we’re also familiar with this way of thinking…
With a mainstream culture of materialism in China these days, it’s very refreshing to see young innovators like the Murmur artists working together to make their creative idealism become a realistic way of life. They know that money and business success are not the only “quality of life” metrics there are. Doing what you love and living up to your true aspirations matters a lot.
That said, it does not necessarily have to interfere with commercial success either. When they started, they sold a lot of hand-made, unique pieces like glasses, bowls and jewellery on the streets of Paris. Now they have orders for batches of hundreds of pieces, for art galleries and various clients, yet everything is still hand-made in their studio.
Even if they love the creative work most, from time to time they also love to sell face to face to customers in order to get direct feedback, and this is important for them. Again, as agile software developers, we’re also familiar with this way of thinking!
Alberto Brandolini (@ziobrando) gave a great talk at the last Domain-Driven Design eXchange in London. In this talk, among many other insights, he described a recurring pattern he had seen many times in several very different projects: “Collaborative Construction, Execution & Tracking. Sounds familiar? Maybe we didn’t notice something cool”
Analysis conflicts are hints
In various unrelated projects we see similar problems. Consider a project that deals with building a complex artifact like an advertising campaign. Creating a campaign involves a lot of different communication channels between many people.
On this project, the boss said:
We should prevent the user from entering incorrect data.
Sure you don’t want to have corrupt data, which is reasonable: you don’t want to launch a campaign with wrong or corrupt data! However the users were telling a completely different story:
[the process with all the strict validations] cannot be applied in practice, there’s no way it can work!
Why this conflict? In fact they are talking about two different processes, but they did not notice that. Sure, it takes the acute eyes of a Domain-Driven Design practitioner to recognize that subtlety!
Alberto mentions what he calls the “Notorious Pub Hypothesis”: think about the pub where all the bad people gather at night, the one where you don’t go if you’re an honest citizen. The hypothesis comes from his mother asking:
Why doesn’t the police shut down this place? Actually there is some value in having this kind of place, since the police knows where all the bad guys are, it makes it easier to find them when you have to.
In a similar fashion, maybe there’s also a need somewhere for invalid data. What happens before we have strictly validated data? Just like the bad guys who exist whether we like it or not, there is a whole universe outside of the application, in which the users prepare the advertising campaign with more than a month of data preparation, lots of emails and many other kinds of communication, all of it untraceable so far.
Why not acknowledge that and include this process, a collaborative process, directly into the application?
Similar data, totally different semantics
Coming from a data-driven mindset, it is not easy to realize that just because the data structures are pretty much the same, you do not have to live with only one representation in your application. Same data, completely different behavior: this suggests different Bounded Contexts!
The interesting pattern recurring in many applications is a split between two phases: one phase where multiple stakeholders collaborate on the construction of a deliverable, and a second phase where the built deliverable is stored, can be reviewed, versioned, searched etc.
The natural focus of most projects seems to be on the second phase; Alberto introduced the name Collaborative Construction to refer to the first phase, often missed in the analysis. Now we have a name for this pattern!
The insight in this story is to acknowledge the two contexts, one about the collaborative construction, the other about managing the outcome of the construction.
Looks like “source Vs. executable”
During collaborative construction, it’s important to accept inconsistencies, warnings or even errors, incomplete data, missing details, because the work is in progress, it’s a draft. Also this work in progress is by definition changing quickly thanks to the contributions of every participant.
Once the draft is ready, it is then validated and becomes the final deliverable. This deliverable must be complete, valid and consistent, and cannot be changed any more. It is there forever. Every change becomes a new revision from now on.
We therefore evolve from a draft semantics to a “printed” or “signed” semantics. The draft requires comments, conversations, proposals, decisions. On the other hand the resulting deliverable may require a version history and release notes.
The insight that we have these two different bounded contexts in turn helps dig deeper into the analysis, to discover that we probably need different data and different behaviors for each context.
Some examples of this split in two contexts:
The shopping cart is a work in progress, that once finalized becomes an order
A request for quote or an auction process is a collaborative construction in search of the best trade condition, and it finally concludes (or not) into a trade
A legal document draft is worked on by many lawyers before it is signed off, after the negotiations have happened, to become the legally binding contract
An example we all know very well: our source code in source control is a work in progress between several developers, and then continuous integration compiles it into an executable and a set of reports, all immutable. It’s OK to have compilation errors and warnings while we’re typing code. It’s OK to have checkstyle violations until we commit. Once we release, we want no warnings and every test to pass. If we need to change something, we simply build another revision; each release cannot change (unless we patch, but that’s another gory story)
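To make the two contexts a bit more concrete, here is a rough Java sketch of my own (not from the talk), with hypothetical DraftCampaign and Campaign classes: the draft tolerates incomplete data and carries the conversation, while the deliverable is validated and immutable:
public class DraftCampaign {
  private final Map<String, String> fields = new HashMap<String, String>();
  private final List<String> comments = new ArrayList<String>();

  // incomplete or even invalid data is fine while the work is in progress
  public void propose(String field, String value) { fields.put(field, value); }
  public void comment(String remark) { comments.add(remark); }

  // validation only happens when the draft is promoted into the deliverable
  public Campaign sign() {
    if (!fields.containsKey("budget") || !fields.containsKey("channel")) {
      throw new IllegalStateException("the draft is not complete yet");
    }
    return new Campaign(fields);
  }
}

// the deliverable: complete, valid and immutable; any change becomes a new revision
final class Campaign {
  private final Map<String, String> fields;
  Campaign(Map<String, String> fields) {
    this.fields = Collections.unmodifiableMap(new HashMap<String, String>(fields));
  }
}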
UX demanding
Building software to deal with collaborative construction is quite demanding with respect to the User Experience (UX).
Can we find examples of Collaborative Construction in software? Sure, think about Google Wave (though it did not end well), Github (successful but not ready for normal users that are not developers), Facebook (though we’re not building anything useful with it).
Watch the video of the talk
Another note, among many others I took away from the talk, is that from time to time we developers should ask the question:
what if the domain expert is wrong?
It does happen that the domain expert is going to mislead the team and the project, because he’s giving a different answer every day, or because she’s focusing on only one half of the full domain problem. Or because he’s evil…
Alberto in front of Campbell's Soup Cans, of course talking about Domain-Driven Design (picture Skillsmatter)
And don’t hesitate to watch the 50-minute video of the talk, to hear many other lessons learned, and also because it’s fun to listen to Alberto talking about zombies while talking about Domain-Driven Design!
You write code to deliver business value, hence your code deals with a business domain like e-trading in finance, or the navigation for an online shoe store. If you look at a random piece of your code, how much of what you see tells you about the domain concepts? How much of it is nothing but technical distraction, or “noise”?
Like the snow on tv
I remember when TV reception was not very reliable, long ago, and you’d see a lot of “snow” on top of the interesting movie. As in the picture below, this snow is actually noise that interferes with the interesting signal.
TV signal hidden behind snow-like noise
The amount of noise compared to the signal can be measured with the signal-to-noise ratio. Quoting the definition from Wikipedia:
Signal-to-noise ratio (often abbreviated SNR or S/N) is a measure used in science and engineering that compares the level of a desired signal to the level of background noise. It is defined as the ratio of signal power to the noise power. A ratio higher than 1:1 indicates more signal than noise.
We can apply this concept of signal-to-noise ratio to the code, and we must try to maximize it, just like in electrical engineering.
Every identifier matters
Look at each identifier in your code: package names, classes and interfaces names, method names, field names, parameters names, even local variables names. Which of them are meaningful in the domain, and which of them are purely technicalities?
Some examples of class and interface names from a recent project (changed a bit to protect the innocent) illustrate this. Identifiers like “CashFlow” or “CashFlowSequence” belong to the Ubiquitous Language of the domain, hence they are the signal in the code.
Examples of classnames as signals, or as noise
On the other hand, identifiers like “CashFlowBuilder” do not belong to the ubiquitous language and therefore are noise in the code. Just counting the number of “signal” identifiers over the number of “noise” identifiers can give you an estimate of your signal-to-noise ratio. To be honest I’ve never really counted to that level so far.
However, for years I’ve been trying to maximize the signal-to-noise ratio in code, and I can attest that it is totally possible to write code with a very high proportion of signal (domain words) and very little noise (technical necessities). As usual, it is just a matter of personal discipline.
Logging to a logging framework, catching exceptions, a lookup from JNDI and even @Inject annotations are noise in my opinion. Sometimes you have to live with this noise, but every time I can live without it, I definitely choose to.
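As a contrived illustration of my own (the Loan class and the JNDI lookup are made up), compare a method drowned in technical noise with one where almost every word belongs to the domain:
// mostly noise: logging, lookup and exception handling hide the single line of domain logic
public CashFlowSequence cashFlowsOf(Loan loan) {
  LOGGER.debug("generating cash flows for " + loan);
  try {
    CashFlowGenerator generator = (CashFlowGenerator) context.lookup("cashFlowGenerator");
    return generator.generate(loan);
  } catch (NamingException e) {
    throw new RuntimeException(e);
  }
}

// mostly signal: every identifier speaks the Ubiquitous Language
public CashFlowSequence cashFlowsOf(Loan loan) {
  return loan.repaymentSchedule().cashFlows();
}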
For the domain model in particular
All this discussion mostly focuses on the domain model, where you’re supposed to manage everything related to your domain. This is where the idea of a signal-to-noise ratio makes the most sense.
A metric?
It’s probably possible to create a metric for the signal-to-noise ratio, by parsing the code and comparing to the ubiquitous language “dictionary” declared in some form. However, and as usual, the primary interest of this idea is to keep it in mind while coding and refactoring, as a direction for action, just like test coverage.
I introduced the idea of the signal-to-noise ratio of the code in my talk at DDDx 2012; you can watch the video here. Follow me (@cyriux) on Twitter!
While Java 8 is coming, are you sure you know the enums introduced in Java 5 well? Java enums are still underestimated, and it’s a pity since they are more useful than you might think; they’re not just for your usual enumerated constants!
Java enum is polymorphic
Java enums are real classes that can have behavior and even data.
Let’s represent the Rock-Paper-Scissors game using an enum with a single method. Here are the unit tests to define the behavior:
@Test
public void paper_beats_rock() {
assertThat(PAPER.beats(ROCK)).isTrue();
assertThat(ROCK.beats(PAPER)).isFalse();
}
@Test
public void scissors_beats_paper() {
assertThat(SCISSORS.beats(PAPER)).isTrue();
assertThat(PAPER.beats(SCISSORS)).isFalse();
}
@Test
public void rock_beats_scissors() {
assertThat(ROCK.beats(SCISSORS)).isTrue();
assertThat(SCISSORS.beats(ROCK)).isFalse();
}
And here is the implementation of the enum, which primarily relies on the ordinal integer of each enum constant, such that item N+1 beats item N. This equivalence between the enum constants and the integers is quite handy in many cases.
/** Enums have behavior! */
public enum Gesture {
ROCK() {
// Enums are polymorphic, that's really handy!
@Override
public boolean beats(Gesture other) {
return other == SCISSORS;
}
},
PAPER, SCISSORS;
// we can implement with the integer representation
public boolean beats(Gesture other) {
return ordinal() - other.ordinal() == 1;
}
}
Notice that there is not a single IF statement anywhere: all the business logic is handled by the integer logic and by polymorphism, where we override the method for the ROCK case. If the ordering between the items were not cyclic, we could implement it using just the natural ordering of the enum; here, polymorphism helps deal with the cycle.
You can do it without any IF statement! Yes you can!
This Java enum is also a perfect example that you can have your cake (offer a nice object-oriented API with intent-revealing names), and eat it too (implement with simple and efficient integer logic like in the good ol’ days).
Over my last projects I’ve used enums a lot as a substitute for classes: they are guaranteed to be singletons, and have ordering, hashCode, equals and serialization to and from text all built in, without any clutter in the source code.
If you’re looking for Value Objects and if you can represent a part of your domain with a limited set of instances, then the enum is what you need! It’s a bit like the Sealed Case Class in Scala, except it’s totally restricted to a set of instances all defined at compile time. The bounded set of instances at compile-time is a real limitation, but now with continuous delivery, you can probably wait for the next release if you really need one extra case.
Well-suited for the Strategy pattern
Let’s move on to a system for the (in-)famous Eurovision song contest; we want to be able to configure when to notify (or not) users of any new Eurovision event. It’s important. Let’s do that with an enum:
/** The policy on how to notify the user of any Eurovision song contest event */
public enum EurovisionNotification {
/** I love Eurovision, don't want to miss it, never! */
ALWAYS() {
@Override
public boolean mustNotify(String eventCity, String userCity) {
return true;
}
},
/**
* I only want to know about Eurovision if it takes place in my city, so
* that I can take holidays elsewhere at the same time
*/
ONLY_IF_IN_MY_CITY() {
// a case of the flyweight pattern, since we pass all the extrinsic data as
// arguments instead of storing them as member data
@Override
public boolean mustNotify(String eventCity, String userCity) {
return eventCity.equalsIgnoreCase(userCity);
}
},
/** I don't care, I don't want to know */
NEVER() {
@Override
public boolean mustNotify(String eventCity, String userCity) {
return false;
}
};
// no default behavior
public abstract boolean mustNotify(String eventCity, String userCity);
}
And a unit test for the non trivial case ONLY_IF_IN_MY_CITY:
@Test
public void notify_users_in_Baku_only() {
assertThat(ONLY_IF_IN_MY_CITY.mustNotify("Baku", "BAKU")).isTrue();
assertThat(ONLY_IF_IN_MY_CITY.mustNotify("Baku", "Paris")).isFalse();
}
Here we define the method as abstract and implement it for each case. An alternative would be to implement a default behavior and only override it for the cases where it makes sense, just like in the Rock-Paper-Scissors game.
Again, we don’t need a switch on the enum to choose the behavior; we rely on polymorphism instead. You probably don’t need to switch on enums much, except for dependency reasons. For example, when the enum is part of a message sent to the outside world, as in Data Transfer Objects (DTOs), you do not want any dependency on your internal code in the enum or its signature.
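For instance (a hypothetical sketch, where NotificationDto is the enum exposed in the message), a switch is acceptable at this boundary to convert the internal enum into its DTO counterpart:
public static NotificationDto toDto(EurovisionNotification notification) {
  switch (notification) {
  case ALWAYS:
    return NotificationDto.ALWAYS;
  case ONLY_IF_IN_MY_CITY:
    return NotificationDto.ONLY_IF_IN_MY_CITY;
  case NEVER:
    return NotificationDto.NEVER;
  default:
    throw new IllegalArgumentException("unknown notification: " + notification);
  }
}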
For the Eurovision strategy, using TDD we could start with a simple boolean for the cases ALWAYS and NEVER. It would then be promoted into the enum as soon as we introduce the third strategy, ONLY_IF_IN_MY_CITY. Promoting primitives is also in the spirit of the “Wrap all primitives” rule from Object Calisthenics, and an enum is the perfect way to wrap a boolean or an integer with a bounded set of possible values.
Because the strategy pattern is often controlled by configuration, the built-in serialization to and from String is also very convenient to store your settings.
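For example (a small sketch, assuming the chosen strategy is persisted as a plain string in some settings store):
// the strategy is stored under its name, and restored with the built-in valueOf()
String setting = settings.get("eurovision.notification"); // e.g. "ONLY_IF_IN_MY_CITY"
EurovisionNotification strategy = EurovisionNotification.valueOf(setting);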
Perfect match for the State pattern
Just like the Strategy pattern, the Java enum is very well-suited for finite state machines, where by definition the set of possible states is finite.
A baby as a finite state machine (picture from www.alongcamebaby.ca)
Let’s take the example of a baby simplified as a state machine, and make it an enum:
/**
* The primary baby states (simplified)
*/
public enum BabyState {
POOP(null), SLEEP(POOP), EAT(SLEEP), CRY(EAT);
private final BabyState next;
private BabyState(BabyState next) {
this.next = next;
}
public BabyState next(boolean discomfort) {
if (discomfort) {
return CRY;
}
return next == null ? EAT : next;
}
}
And of course some unit tests to drive the behavior:
@Test
public void eat_then_sleep_then_poop_and_repeat() {
assertThat(EAT.next(NO_DISCOMFORT)).isEqualTo(SLEEP);
assertThat(SLEEP.next(NO_DISCOMFORT)).isEqualTo(POOP);
assertThat(POOP.next(NO_DISCOMFORT)).isEqualTo(EAT);
}
@Test
public void if_discomfort_then_cry_then_eat() {
assertThat(SLEEP.next(DISCOMFORT)).isEqualTo(CRY);
assertThat(CRY.next(NO_DISCOMFORT)).isEqualTo(EAT);
}
Yes, we can reference enum constants from one another, with the restriction that only constants defined earlier can be referenced. Here we have a cycle between the states EAT -> SLEEP -> POOP -> EAT etc., so we need to open the cycle and close it with a workaround at runtime.
We indeed have a graph with the CRY state that can be accessed from any state.
I’ve already used enums to represent simple trees of categories, simply by referencing in each node its elements, all as enum constants.
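For example, here is a small sketch of my own (the menu categories are made up), where each category node lists its elements, all enum constants declared before it:
public enum MenuNode {
  // leaves first, then the categories that group them (only earlier constants can be referenced)
  ESPRESSO, LATTE, COFFEE(ESPRESSO, LATTE),
  GREEN_TEA, BLACK_TEA, TEA(GREEN_TEA, BLACK_TEA),
  DRINKS(COFFEE, TEA);

  private final MenuNode[] elements;

  private MenuNode(MenuNode... elements) {
    this.elements = elements;
  }

  public List<MenuNode> elements() {
    return Arrays.asList(elements);
  }
}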
Enum-optimized collections
Enums also have the benefits of coming with their dedicated implementations for Map and Set: EnumMap and EnumSet.
These collections have the same interface and behave just like your regular collections, but internally they exploit the integer nature of the enums as an optimization. In short, you get old C-style data structures and idioms (bit masking and the like) hidden behind an elegant interface. This also demonstrates that you don’t have to compromise your APIs for the sake of efficiency!
To illustrate the use of these dedicated collections, let’s represent the 7 cards in Jurgen Appelo’s Delegation Poker:
public enum AuthorityLevel {
/** make decision as the manager */
TELL,
/** convince people about decision */
SELL,
/** get input from team before decision */
CONSULT,
/** make decision together with team */
AGREE,
/** influence decision made by the team */
ADVISE,
/** ask feedback after decision by team */
INQUIRE,
/** no influence, let team work it out */
DELEGATE;
There are 7 cards; the first 3 are more control-oriented, the middle card is balanced, and the last 3 cards are more delegation-oriented (I made that interpretation up, please refer to his book for explanations). In the Delegation Poker, every player selects a card for a given situation, and earns as many points as the card value (from 1 to 7), except the players in the “highest minority”.
It’s trivial to compute the number of points using the ordinal value + 1. It is also straightforward to select the control oriented cards by their ordinal value, or we can use a Set built from a range like we do below to select the delegation-oriented cards:
public int numberOfPoints() {
return ordinal() + 1;
}
// It's ok to use the internal ordinal integer for the implementation
public boolean isControlOriented() {
return ordinal() < AGREE.ordinal();
}
// EnumSet is a Set implementation that benefits from the integer-like
// nature of the enums
public static final Set<AuthorityLevel> DELEGATION_LEVELS = EnumSet.range(ADVISE, DELEGATE);
// enums are comparable hence the usual benefits
public static AuthorityLevel highest(List<AuthorityLevel> levels) {
return Collections.max(levels);
}
}
EnumSet offers convenient static factory methods like range(from, to), to create a set that includes every enum constant between ADVISE and DELEGATE in our example, in declaration order.
To compute the highest minority we start with the highest card, which is nothing but finding the max, something trivial since the enum is always comparable.
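A small test of my own (assuming the usual static imports) to illustrate:
@Test
public void the_card_with_the_highest_value_wins() {
  assertThat(highest(Arrays.asList(TELL, ADVISE, CONSULT))).isEqualTo(ADVISE);
  assertThat(ADVISE.numberOfPoints()).isEqualTo(5);
}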
Whenever we need to use this enum as a key in a Map, we should use the EnumMap, as illustrated in the test below:
// Using an EnumMap to represent the votes by authority level
@Test
public void votes_with_a_clear_majority() {
final Map<AuthorityLevel, Integer> votes = new EnumMap<AuthorityLevel, Integer>(AuthorityLevel.class);
votes.put(SELL, 1);
votes.put(ADVISE, 3);
votes.put(INQUIRE, 2);
assertThat(votes.get(ADVISE)).isEqualTo(3);
}
Java enums are good, eat them!
I love Java enums: they’re just perfect for Value Objects in the Domain-Driven Design sense, where the set of possible values is bounded. In a recent project I deliberately managed to have a majority of value types expressed as enums. You get a lot of awesomeness for free, and especially with almost no technical noise. This helps improve my signal-to-noise ratio between the words from the domain and the technical jargon.
Of course I make sure each enum constant is also immutable, and I get the correct equals, hashCode, toString, String or integer serialization, singleton-ness and very efficient collections on them for free, all with very little code.
The power of polymorphism (picture from sys-con.com – Jim Barnabee article)
Enum polymorphism is very handy; I never use instanceof on enums and I hardly ever need to switch on them either.
I’d love for the Java enum to be complemented by a similar construct, like the case class in Scala, for when the set of possible values cannot be bounded. And a way to enforce immutability of any class would be nice too. Am I asking too much?
Also <troll>don’t even try to compare the Java enum with the C# enum…</troll>
Beam Beats is an interactive and tangible rhythm sequencer that translates the geometry of beacons on the ground into rhythms and polyrhythms thanks to a rotating laser beam. This experimental MIDI instrument is about investigating self-similarities in polyrhythms, as described in this post.
Update: Here’s the video of a demo at Dorkbot at La Gaité Lyrique in Paris
Before I report more on this project with demos and videos, here are some pictures to illustrate the work currently in progress, thanks to lasercutting techniques in particular:
The brand new laser-cut acrylic structure
The prototype laser head and retro-reflection light sensor
The Arduino to detect the light variations and convert them into MIDI notes
Very early design of beacons, each with their own light sensor
This prototype works, here’s a picture of the laser beam rotating:
I’ve been working on the Arduino source code to add the ability to send MIDI clock in order to sync Beam Beats to other MIDI devices.
Now I mostly need to settle on the design for the active beacons (sensor stands) and the passive beacons (retroreflective adhesive stands), and find out what to play with this sequencer…
How do you measure quality? Number of defects? Customer happiness? Money earned? Developer smiles? That’s the question raised by @gojkoadzic in his keynote at the recent BDD and Agile Testing Exchange in London, to make us think and propose some elements of response.
We tend to ignore information
We are used to ignoring automatic alarms, even fire alarms: we just don’t care, but we do care when a person tells us about an issue or shouts “fire in the building!”.
Security Poster in a mine
As in the story of the Gloves on the Boardroom Table, making the problem tangible and visible helps trigger reactions. The book Switch gives examples of such surprising techniques. Sharing information is not enough; a company is like an elephant, you don’t really drive it, you motivate it to go where you’d like it to go. So visualization can help there.
Best practices change over time, just as techniques in athletics replace each other. For example, one company disables most of its functional tests after a feature is developed, because they impede the frequency of the release cycle; on the other hand, they closely monitor the conversion rate of the website, which for them is the best metric since it encompasses both the technical bugs and the business bugs.
This does not mean you should disable your functional tests, but it illustrates that what may seem nonsense today may be the next best practice tomorrow.
Absence of bugs ≠ presence of quality
The absence of bugs is completely different from the presence of quality. Twitter has lots of bugs but most people are happy with it, so it has quality; on the other hand, Nokia phones may have no bugs, yet if nobody likes them they have no quality either.
Do we have quality here?
So how do we measure quality? If you put 20 people in a room with 3 post-its each and ask them to write down which attributes of the product they think are the most important, and if you get a small enough group of common answers, then those are candidates for things you need to measure.
Usually quality can be measured at 3 levels (the 3 P’s):
Process effectiveness
Product status
Production performance
Here I have the feeling that each of the first two is just a proxy for the next, and only the last is the real one, though I’m not sure.
Test coverage is crap
Taking the example of a UI, a window with 5 buttons on it will likely require 5 times more testing than one with only one button, even if the latter brings a lot more value; this suggests that test coverage is crap.
If test coverage itself is crap, you may however monitor your risk coverage, that is, the amount of testing compared to the associated risk. This metric helps identify where you need to test more.
Visualize to make information actionable
Once you know what to measure, you want to visualize it. James Bach proposes his low-tech testing dashboard technique: you divide your system into high-level subsystems, and for each you monitor the “progress of testing” (not yet started, half done, or done) and the “level of confidence so far”, with 3 smileys from happy to anxious. That’s better than a bug-tracking tool with 500 tickets mixed together.
Another visualization mode is the use of effect maps, or the ACC matrix. You may also monitor in real time the feed of sentiment of every Twitter message about your company or product, as they do at FINN Technology in Oslo.
As a conclusion, Gojko introduces the brand new visualisingquality.org website where you can find many ways to visualize your measures of quality and where you can contribute additional ones. Since making things visible is so essential for action, let’s try ideas and share them through this website!
Software development technologies and trends are not particularly tangible things, yet we need to reason on them. At Arolla, the company I’m part of, we’ve designed an “ancient world map” of software development, as a cartography of the universe of software development we live in. Built for our own purpose, we also share it so you can benefit from it.
The Arolla "ancient world map" of our software development universe
If you want to use the map with your own teams, please do so (it’s licensed CC-NC-ND). If you need a high-resolution file for print, just ask (the file is quite big). We’d love to get your feedback!
The metaphor of an ancient world map
Agile and XP suggest using metaphors to help materialize abstract stuff and make it easier to grasp. You’ve probably seen Eric Evans (Domain-Driven Design) showing a picture of an ancient world map when presenting his concept of “a model built for a purpose”. We wanted to materialize software development technologies and trends, for which we have no clear and accurate visualisation yet, just like explorers in the Middle Ages had no accurate knowledge of the world, lands and oceans. Ancient world maps had to represent that ignorance, with dragons and strange creatures in the less-known areas.
So we chose this metaphor to represent our universe. And ancient world maps are beautiful too!
Orchestrate the elements of the map to best convey its message to its audience.
In our map, each continent represents a particular chapter of related technical stuff, and the oceans in between represent “soft” techniques that complement them best. Of course the universe of software development has many more dimensions than the two available on a map. This means that such a map is quite subjective; it depends on our own mental model. On the other hand, this is also true of any map of the physical world, which is supposed to represent a 3D planet on a plane, with a deep decision on whether to preserve angles or distances.
We tried to put the most related technologies as close as possible to each other. Regions in the middle of the map represent the core of a developer’s daily work, in contrast with the regions closer to the poles, which are more specialized.
Of course not every existing technology and trend was included on the map, especially the ones that we do not want to offer to our clients or that are of less importance. As a fan of DDD I regret we could not include our clients’ domains (finance, e-commerce, media, e-advertising, online games…) on the map, but a map is for a purpose, just like a model. A map of everything would just be useless.
Drawing the map proved a lot more work than expected, with hundreds of layers and lots of little adjustments everywhere, resulting into a huge file for printing.
To discuss skills and areas of expertise
Why this map in the first place? At Arolla we wanted to organise a session for all our consultants to discover more about each other from a skills point of view. We also wanted more senior consultants to volunteer and take ownership of some areas of their technical expertise.
We didn’t want another boring evening of slide shows in front of a passive audience. We wanted a more concrete, more fun and more engaging way to talk about technical skills and areas of knowledge.
So we printed the map on a big poster laid on a table, like in the captain’s room of an old vessel, and made it into a game.
Map + Lego = game
During last June’s DDD Immersion course at Skills Matter, Alberto told me about Lego Serious Play. This game helps people express their ideas and be creative using Lego bricks as a medium, helping them say crazy things in front of other people. It also makes any discussion a lot more fun! So we bought a set of Lego mini-figures, including some crazy ones, for our consultants to play with on the map (we did not follow the actual Lego Serious Play game at all).
The first part of the game was to get familiar with the map. I shouted the names of various technologies at random, and the first to find where each one was on the map would take a marshmallow and place it as a flag (we had lots of sweeties everywhere too). This part was fast-paced and didn’t take long, as it quickly became obvious that everyone had already understood and indexed the map in their mind.
Putting marshmallow on the map
In the next part, we each had to create our own Lego persona, using the Lego building blocks available. It was really fun, and everyone was really happy to be given the opportunity to play with Lego again. It was nice to see that nobody built a mini-figure as shown on the box; they were all very personal, ironic or really weird.
Each consultant then told the story of his/her career by moving his/her custom Lego figure over the big map, starting from some technology (e.g. C++) “in this continent” before “moving to another territory” (e.g. Java or .Net), then “crossing oceans” to gain additional skills in software factory and agile.
I particularly enjoyed the relaxed atmosphere, where anyone would interrupt to ask questions or comment, or throw a joke out loud. Perhaps the map, the Lego and the marshmallow made it clear that it was not too serious an exercise. The map and the Lego figures were able to make abstract skills more tangible and more fun for the time of the game.
We could have done the same exercise without all that, but the feedback we had from our consultants was very positive, they really enjoyed the game.
Putting marshmallows on the map
The Arolla “ancient world map” of the software development universe
What’s next?
We’re going to put the big poster of the map on the wall, as decoration. Smaller printed maps will help scope some discussions, perhaps for interviews, or when talking with people less involved in technology. It could also be useful for developers to self-assess how much they know about the current state of the art in our craft.
We did it again! On May 15 we had our third jam session at home on a Sunday afternoon.
What?
As usual, total improvisation: switch the machines on, start the recording, then everyone plays whatever he/she likes!
Nothing was prepared in any way, except the sounds loaded in the MPC. We had no lyrics prepared, next time it could be a good idea to prepare something…
Who?
Rongrong, Yuan, Muzhou, Pierrick, Yunshan and Cyrille performed this session. 6 hours recorded in total, aggressively cut to keep the most “interesting” bits:
Everyone more or less played on everything, with most lead vocals by Rongrong. We unfortunately did not record any flute solo from Pierrick (next time you need to perform in front of a mike!).
For the curious
The setup was organised as a ring with an MPC 500 for the beats, a microKorg for the leads, a KaossPad for effects and as a looper for the mic or the mixer subgroup, all midi-clock sync-ed, plus an old Casio keyboard with nice cheesy sounds. Enjoy!
The Composite pattern is a very powerful design pattern that you use regularly to manipulate a group of things through the very same interface as a single thing. By doing so you don’t have to discriminate between the singular and plural cases, which often simplifies your design.
Yet there are cases where you are tempted to use the Composite pattern but the interface of your objects does not quite fit. Fear not: some simple refactorings of the method signatures can make your interfaces Composite-friendly, and it’s worth it.
Always start with examples
Imagine an interface for a financial instrument with a getter on its currency:
public interface Instrument {
Currency getCurrency();
}
This interface is alright for a single instrument, but it does not scale to a group of instruments (Composite pattern), because the corresponding getter in the composite class would look like this (notice that the return type is now a collection):
public class CompositeInstrument {
// list of instruments...
public Set<Currency> getCurrencies() {...}
}
We must admit that each instrument in a composite instrument may have a different currency, hence the composite may be multi-currency, hence the collection return type. This breaks the goal of the Composite pattern, which is to unify the interfaces for single and multiple elements. If we stop there, we now have to discriminate between a single Instrument and a CompositeInstrument, and we have to do that at every call site. I’m not happy with that.
The composite pattern applied to a lamp: same plug for one or several lamps
The brutal approach
The brutal approach is to generalize the initial interface so that it works for the composite case:
public interface Instrument {
Set<Currency> getCurrencies();
}
This interface now works for both the single case and the composite case, but at the cost of always having to deal with a collection as the return value. In fact I’m not sure we’ve simplified our design with this approach: if the composite case is not used that often, we have even complicated the design for little benefit, because the returned collection type always gets in our way, requiring a loop every time it is called.
The trick to improve that is simply to investigate what our interface is really used for. The getter on the initial interface only reveals that we did not think about the actual use beforehand; in other words, it shows a design decision made “by default”, or a lack of one.
Turn it into a boolean method
Very often this kind of getter is mostly used to test whether the instrument (single or composite) has something to do with a given currency, for example to check if an instrument is acceptable for a screen in USD or tradable by a trader who is only granted the right to trade in EUR.
In this case, you can revamp the method into another intention-revealing method that accepts a parameter and returns a boolean:
public interface Instrument {
boolean isInCurrency(Currency currency);
}
This interface remains simple, is closer to our needs, and in addition it now scales for use with a Composite, because the result for a composite instrument can be derived from the result on each single instrument and the AND operator:
public class CompositeInstrument {
  private final List<Instrument> instruments = new ArrayList<Instrument>();

  public boolean isInCurrency(Currency currency) {
    boolean result = true;
    for (Instrument instrument : instruments) {
      result &= instrument.isInCurrency(currency);
    }
    return result;
  }
}
Something to do with Fold
As shown above, the problem is all about the return value. Generalizing from booleans and their boolean logic in the previous example (‘&=’), the overall trick for a Composite-friendly interface is to define methods that return a type that is easy to fold over successive executions. The idea is to merge (“fold”) the results of several calls into one single result of the same kind; with booleans you typically do that with AND or OR.
If the return type is a collection, then you could perhaps merge the results using addAll(…), if that makes sense for the operation.
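For example, going back to the brutal approach where getCurrencies() returns a Set, the composite result is simply the union of each child result (a sketch):
public class CompositeInstrument {
  private final List<Instrument> instruments = new ArrayList<Instrument>();

  // fold each child result into one single Set with addAll()
  public Set<Currency> getCurrencies() {
    final Set<Currency> currencies = new HashSet<Currency>();
    for (Instrument instrument : instruments) {
      currencies.addAll(instrument.getCurrencies());
    }
    return currencies;
  }
}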
Technically, this is easy when the return type is closed under an operation (a magma), i.e. when the result of the operation is of the same type as the operands, just like ‘boolean1 AND boolean2’ is also a boolean.
This is obviously the case for booleans and their boolean logic, but also for numbers and their arithmetic, collections and their set operations, strings and their concatenation, and many other types, including your own classes, as Eric Evans suggests when he recommends “Closure of Operations” in his book Domain-Driven Design.
Fire hydrants: from one pipe to multiple pipes (composite)
Turn it into a void method
Though not possible in our previous example, void methods work very well with the Composite pattern: with nothing to return, there is no need to unify or fold anything:
public class CompositeFunction {
List functions = ...;
public void apply(...) {
// for each function, function.apply(...);
}
}
Continuation-passing style
The last trick to help with the Composite pattern is to adopt the continuation-passing style, by passing a continuation object as a parameter to the method. The method then sets its result into it instead of using its return value.
As an example, to perform search on every node of a tree, you may use a continuation like this:
public class SearchResults {
  private final List<Node> results = new ArrayList<Node>();
  public void addResult(Node node) { results.add(node); } // append to the list of results
  public List<Node> getResults() { return results; }
}
public class Node {
List<Node> children = ...;
public void search(SearchResults sr) {
//...
if (found){
sr.addResult(this);
}
// for each child, child.search(sr);
}
}
By passing a continuation as an argument to the method, the continuation takes care of the multiplicity, and the method is now well suited to the Composite pattern. You may consider that the continuation encapsulates into one object the process of folding the result of each call; of course the continuation is mutable.
This style does complicate the interface of the method a little, but it also offers the advantage of a single allocation of one continuation instance across every call.
That's continuation passing style (CC Some rights reserved by 2011 BUICK REGAL)
One word on exceptions
Methods that can throw exceptions (even unchecked exceptions) can complicate use in a composite. To deal with exceptions within the loop that calls each child, you can just throw the first exception encountered, at the expense of giving up the rest of the loop. An alternative is to collect every caught exception into a collection, then throw a composite exception wrapping that collection once you’re done with the loop. In some other cases the composite loop may also be a convenient place to do the actual exception handling, such as full logging, in one central place.
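Here is a sketch of the second option (revalue() and CompositeException are made-up names for the example):
public void revalueAll() {
  final List<Exception> failures = new ArrayList<Exception>();
  for (Instrument instrument : instruments) {
    try {
      instrument.revalue(); // hypothetical void operation on each child
    } catch (RuntimeException e) {
      failures.add(e); // collect every failure instead of giving up at the first one
    }
  }
  if (!failures.isEmpty()) {
    throw new CompositeException(failures); // hypothetical unchecked exception wrapping the collection
  }
}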
In closing
We’ve seen some tricks to adjust the signatures of your methods so that they work well with the Composite pattern, typically by folding the return type in some way. In return, you don’t have to discriminate manually between the single and the multiple, and one single interface can be used much more often; it is with these kinds of details that you can keep your design simple and ready for any new challenge.
In functional programming, Map and Fold are two extremely useful operators, and they belong to every functional language. If the Map and Fold operators are so powerful and essential, how do you explain that we can do our job using Java even though the Java programming language lacks these two operators? The truth is that you already do Map and Fold when you code in Java, except that you do them by hand each time, using loops.
Disclaimer: I’m not a reference in functional programming and this article is nothing but a gentle introduction; FP aficionados may not appreciate it much.
You’re already familiar with it
Imagine a List<Double> of VAT-excluded amounts. We want to convert this list into another corresponding list of VAT-included amounts. First we define a method to add the VAT to one single amount:
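A minimal version could look like this (assuming the VAT is simply a percentage rate applied to the amount):
public double addVAT(double amount, double rate){return amount * (1 + rate);}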
Now let’s apply this method to each amount in the list:
public List<Double> addVAT(List<Double> amounts, double rate){
final List<Double> amountsWithVAT = new ArrayList<Double>();
for(double amount : amounts){
amountsWithVAT.add(addVAT(amount, rate));
}
return amountsWithVAT;
}
Here we create an output list, and for each element of the input list, we apply the method addVAT() to it and store the result into the output list, which has the exact same size. Congratulations: we have just done, by hand, a Map of the method addVAT() over the input list. Let’s do it a second time.
Now we want to convert each amount into another currency using the currency rate, so we need a new method for that:
public double convertCurrency(double amount, double currencyRate){return amount / currencyRate;}
Now we can apply this method to each element in the list:
public List<Double> convertCurrency(List<Double> amounts, double currencyRate){
final List<Double> amountsInCurrency = new ArrayList<Double>();
for(double amount : amounts){
amountsInCurrency.add(convertCurrency(amount, currencyRate));
}
return amountsInCurrency;
}
Notice how the two methods that accept a list are similar, except the method being called at step 2:
create an output list,
call the given method for each element from the input list and store the result into the output list
return the output list.
You do that often in Java, and that’s exactly what the Map operator is: apply a given method someMethod(T):T to each element of a list<T>, which gives you another list<T> of the same size.
Functional languages recognize that this particular need (apply a method on each element of a collection) is very common so they encapsulate it directly into the built-in Map operator. This way, given the addVAT(double, double) method, we could directly write something like this using the Map operator:
List amountsWithVAT = map (addVAT, amounts, rate)
Yes, the first parameter is a function, since functions are first-class citizens in functional languages and can be passed as parameters. Using the Map operator is more concise and less error-prone than the for-loop, and the intent is also much more explicit, but we don’t have it in Java…
So the point of these examples is that you are already familiar, without even knowing, with a key concept of functional programming: the Map operator.
And now for the Fold operator
Coming back to the list of amounts, now we need to compute the total amount as the sum of each amount. Super-easy, let’s do that with a loop:
public double totalAmount(List<Double> amounts){
double sum = 0;
for(double amount : amounts){
sum += amount;
}
return sum;
}
Basically we’ve just done a Fold over the list, using the function ‘+=’ to fold each element into one accumulated element, here a number, incrementally, one at a time. This is similar to the Map operator, except that the result is not a list but a single element, a scalar.
This is again the kind of code you commonly write in Java, and now you have a name for it in functional languages: “Fold” or “Reduce”. The Fold operator is usually recursive in functional languages, and we won’t describe it here. However we can achieve the same intent in an iterative form, using some mutable state to accumulate the result between iterations. In this approach, the Fold takes a method with internal mutable state that expects one element, e.g. someMethod(T), and applies it repeatedly to each element from the input list<T>, until we end up with one single element T, the result of the fold operation.
Typical functions used with Fold are summation, logical AND and OR, List.add() or List.addAll(), StringBuilder.append(), max or min etc.. The mindset with Fold is similar to aggregate functions in SQL.
Thinking in shapes
Thinking visually (with sloppy pictures), Map takes a list of size n and returns another list of the same size:
On the other hand, Fold takes a list of size n and returns a single element (scalar):
You may remember my previous articles on predicates, which are often used to filter collections into collections with fewer elements. In fact this filter operator is the third standard operator that complements Map and Fold in most functional languages.
Eclipse template
Since Map and Fold are quite common it makes sense to create Eclipse templates for them, e.g. for Map:
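For instance, a Map-like template body could look something like this (a sketch using Eclipse’s ${...} template variables; adjust the names to your own conventions):
final List<${type}> ${result} = new ArrayList<${type}>();
for (${type} ${element} : ${list}) {
    ${result}.add(${cursor});
}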
Getting closer to map and fold in Java
Map and Fold are constructs that expect a function as a parameter, and in Java the only way to pass a method is to wrap it into an interface.
In Apache Commons Collections, two interfaces are particularly interesting for our needs: Transformer, with one method transform(T):T, and Closure, with one single method execute(T):void. The class CollectionUtils offers the method collect(Iterator, Transformer) which is basically a poor-man Map operator for Java collections, and the method forAllDo() that can emulate the Fold operator using closures.
A similar transform() method is also available in Google Guava, on the class Lists for Lists and on the class Maps for Maps.
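For example with Guava’s Lists.transform() (a sketch reusing the addVAT(double, double) method from above; the rate has to be final to be captured by the anonymous class):
final double rate = 0.196;
List<Double> amountsWithVAT = Lists.transform(amounts, new Function<Double, Double>() {
  public Double apply(Double amount) {
    return addVAT(amount, rate);
  }
});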
To emulate the Fold operator in Java, you can use a Closure interface, e.g. the Closure interface in Apache Commons Collections, with one single method taking only one parameter, so you must keep the current (mutable) state internally, just like ‘+=’ does.
Unfortunately there is no Fold in Guava, though it is regularly asked for, and there is not even a closure-like function; but it is not hard to create your own. For example, you can implement the grand total above with something like this:
// the closure interface with same input/output type
public interface Closure<T> {
T execute(T value);
}
// an example of a concrete closure
public class SummingClosure implements Closure<Double> {
private double sum = 0;
public Double execute(Double amount) {
sum += amount; // apply '+=' operator
return sum; // return current accumulated value
}
}
// the poor man Fold operator
public final static <T> T foreach(Iterable<T> list, Closure<T> closure) {
T result = null;
for (T t : list) {
result = closure.execute(t);
}
return result;
}
@Test // example of use
public void testFold() throws Exception {
SummingClosure closure = new SummingClosure();
List<Double> exVat = Arrays.asList(new Double[] { 99., 127., 35. });
Double result = foreach(exVat, closure);
System.out.println(result); // print 261.0
}
Not only for collections: Fold over trees and other structures
The power of Map and Fold is not limited to simple collections, but can scale to any navigable structure, in particular trees and graphs.
Imagine a tree using a class Node with its children. It may be a good idea to code once the Depth-First and the Breadth-First searches (DFS & BFS) into two generic methods that accept a Closure as single parameter:
public class Node ...{
...
public void dfs(Closure closure){...}
public void bfs(Closure closure){...}
}
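A recursive DFS with a closure could look like this (my own sketch):
public void dfs(Closure<Node> closure) {
  closure.execute(this); // visit this node first
  for (Node child : children) {
    child.dfs(closure); // then recurse into each child: the traversal handles the multiplicity
  }
}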
I have regularly used this technique in the past, and I can tell it can cut the size of your classes big time, with only one generic method instead of many similar-looking methods that would each redo their own tree traversal. More importantly, the traversal can be unit-tested on its own using a mock closure. Each closure can also be unit-tested independently, and all that just makes your life so much simpler.
A very similar idea can be realized with the Visitor pattern, and you are probably already familiar with it. I have seen many times in my code and in the code of several other teams that Visitors are well-suited to accumulate state during the traversal of the data structure. In this case the Visitor is just a special case of closure to be passed for use in the folding.
One word on Map-Reduce
You probably heard of the pattern Map-Reduce, and yes the words “Map” and “Reduce” in it refer to the same functional operators Map and Fold (aka Reduce) we’ve just seen. Even though the practical application is more sophisticated, it is easy to notice that Map is embarrassingly parallel, which helps a lot for parallel computations.