Make Money vs Reduce Risks dichotomy

In sports, football for example, players have only one goal in mind: score, score, again and again, as often as possible. Close to them, but not too close, referees have only one goal in mind: quickly detect every violation of the rules of the game and sanction it.

The players know the rules, yet we still need an antagonistic role, the referees, to keep the game fair. It is never perfect, but it is not plain chaos either.

Many mature businesses have chosen a similar structure. There is a role to make money, as much money as possible, and another role to keep risk under an acceptable level.


In finance, this scheme is visible at several levels. Traders and sales people in the front office focus on making money, while officers in the risk department closely monitor their activity to check they don’t go too far. We hear loudly in the news when traders go too far. We don’t hear much when the risk people go too far, yet reducing risk usually hurts profitability in the short term.

This scheme occurs again between banks, which want to make money, and the regulators, who are supposed to protect the country and the customers. Whether the regulators do a good job or not is not the point of this article; my point is that there is a common business pattern here.

When there is a common business pattern, and when the business is heavily supported by software systems, does this mean there is a corresponding pattern in the software itself? I believe there is, a bit like a generalized Conway’s Law. The corresponding software pattern is: when the business has an obvious antagonism like “Making Money vs Reducing Risk”, then it probably calls for two distinct Bounded Contexts in the corresponding software.

This dichotomy is not a rule, it is just a heuristic suggesting there may be a need for two distinct Bounded Contexts.

Who is the key decision maker is probably the question that shapes everything; I learnt that a few years ago in a course with @ziobrando. In particular, when two management hierarchies are involved, even if their visions coincide right now, it’s unlikely that both visions will evolve the same way over time. This is a reason to split the solution into two Bounded Contexts that will evolve independently. So if you have a Trading department and a Risk department, you’re in this situation.

Modeling in the two contexts

Making money typically involves good commercial relationships and competitive pricing expertise, plus enough speed to react to opportunities.

Software systems for that typically manage the business one deal at a time. They often need to be real-time, or at least fast enough not to lose impatient customers. Sometimes we may even accept trading calculation accuracy for speed, for example by using floating-point calculations instead of BigDecimals, or an approximation instead of the exact formula.
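As a minimal sketch of that trade-off (all names made up, not taken from any actual system):

    // A minimal sketch (hypothetical names): the front-office path accepts the tiny
    // rounding drift of double arithmetic to stay fast; the exact BigDecimal variant
    // is slower but drift-free.
    import java.math.BigDecimal;
    import java.math.RoundingMode;

    public class PricingSketch {

        // Fast approximation: good enough to quote a price to an impatient customer
        static double quickPrice(double notional, double rate, int periods) {
            return notional * Math.pow(1.0 + rate, periods);
        }

        // Exact version: slower, but no floating-point drift
        static BigDecimal exactPrice(BigDecimal notional, BigDecimal rate, int periods) {
            return notional.multiply(BigDecimal.ONE.add(rate).pow(periods))
                           .setScale(2, RoundingMode.HALF_EVEN);
        }
    }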

Software systems that support making money need to help the sales people be fast, for example with rich defaulting of the input values.

By contrast, software for officers who want to reduce or control risk often computes risk metrics out of a lot of deals. It may be fraud analysis, or stress tests simulating market crises. It is often just computing the overall risk taken by summing up the numbers from each deal. Some do that in real time too, but usually it can accommodate much slower paces: on-demand, daily, weekly or even monthly.
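A rough sketch of that kind of aggregation (the Deal record is hypothetical): one risk figure summed across many deals, typically in an end-of-day batch rather than deal by deal in real time.

    // Hypothetical sketch of a risk-context computation: aggregate one exposure
    // figure across all deals, e.g. in an end-of-day batch.
    import java.util.List;

    record Deal(String id, double exposure) {}

    class RiskAggregation {
        static double totalExposure(List<Deal> deals) {
            return deals.stream().mapToDouble(Deal::exposure).sum();
        }
    }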


Sometimes the competition is so tight that risk control becomes the key differentiator between competitors. In this situation risk control becomes another miniature sub-domain within the domain of selling and pricing. Still, it has its own risk-oriented perspective on the business, and it works like a delegation of responsibility from the risk officers to the front-office people and their trading bots. Even then, there will also be a full-featured domain of risk control outside, with the corresponding software in its own Bounded Context.

A developer example

DevOps is the classical example in software development: developers want to release often to deliver more value. However, ops people know that each release comes with risks for production, so they traditionally prefer to release less frequently. “No release, no risk” would be ideal for them.


In this scheme, developers and ops teams use different tools, and don’t monitor the same indicators. When they get closer as in DevOps, the ops usually delegate some risk control to the development team and their automated testing tools, but they keep their own expertise and their specific tooling.

Many thanks to @ziobrando and @mathiasverraes for the early feedback and some complements incorporated into the text.


The Actual vs Plan Dichotomy

I once worked for the IT department of a large restaurant network. They had a secret sauce, and it wasn’t one you could eat.

In a restaurant kitchen, when a customer orders a dish on the menu you have to cook it. You take the recipe with a list of ingredients and their quantities, and you prepare the dish by following the instructions. In theory, if the recipe says you need a quarter of a lemon, then you should be able to cook 4 dishes with one lemon. Unfortunately, when you analyze the actual consumption of ingredients over a day, you realize that in practice you actually made 20 dishes with 6 lemons. That’s 3.3 dishes per lemon! Real life is not as nice as theory: in real life you have waste. A quarter of a lemon falls on the floor and you can’t use it. A lemon was too small so you had to use a “bigger quarter”, and there are only 3 bigger quarters in one lemon. And the last lemon could not be used completely because the number of dishes actually ordered is not a multiple of 4.


The restaurant network I worked for was very good at analytical accounting. This was their secret sauce. They were actually monitoring the usage of lemons, among other things. That was key to their profitability, because businesses like restaurants have quite narrow margins. It was important for them to know the gap between the recipe and the actual cooking of the dishes.

It turns out that tracking the gaps between what was planned (the recipe) and what is actually done (the cooking) is important for lots of businesses in every domain.


In manufacturing, you have a bill of materials, the BOM, that lists what is necessary to build a product. The BOM states that you need 12 screws to close the enclosure. However, in practice you may lose one screw and end up consuming 13, not 12. If the tracking report shows this situation happens often, and if the screws are expensive enough, then some engineers will want to improve something on the production line to prevent it. You may automate, or make the screwdrivers magnetic.

In electronics manufacturing you build silicon wafers that will yield thousands of chips. This is the theory, before the quality control where you discover how many of them are defective. This kind of manufacturing is expensive, so you want to monitor that.

The point so far is that mature businesses have lots of competitors, and they often compete on continuous optimizations of the delivery process. You need tracking for that. That’s a recurring business problem, and you need software for that.

So how do we design such software in this situation?

The answer is that you most likely have two distinct Bounded Contexts: one for describing the plan, and another for tracking the gaps between the actual and the plan. And even if the two contexts often seem to deal with the same real-life concepts, they have in fact no reason to look similar.

The one context approach

At first, “the recipe from the book” and “the recipe as I’ve actually cooked it” look like the very same thing. That’s why a common approach is to mix both models into one. I think this is a poor hack.

For example, project management tools focus on the theoretical plan, and then add extra fields directly on the elements of the plan to track the time actually burnt. It may work as long as it’s very simple. The further we move into complexity, the less sense it makes to keep the two models aligned. With a TODO list, “Done!” might be enough, but as the tracking becomes more complex, it will grow its own tracking language.

Different intentions

Modeling the recipe and modeling the tracking of the actual cooking look similar, but they are totally different in their intentions. The recipe aims at telling how to prepare the dish, including instructions on how to stir or how to cut the vegetables. The tracking focuses on waste, errors, or improving the purchasing process or the supply chain management. That’s a strong reason to call for a different point of view.

Other differences

For tracking you may not be able to observe all details. The recipe talks about quarters of lemon, but a quarter of a lemon does not exist on the market. You can only measure the number of lemons you bought, and the number of remaining lemons at the end of the day.

What actually happens during cooking, during a manufacturing process or during a project can be totally different from the plan. Perhaps there were no lemons left, so the cook exceptionally had to substitute lemon juice, an ingredient that is not even in the recipe. A good tracking model should accommodate the tracking of this kind of event.

On a similar note, for steering purposes, a coarse level of granularity is often enough. You can neglect details from the process. Perhaps a weekly or monthly inventory of the remaining ingredients is enough. Tracking the actual consumption of ingredients dish by dish would be a total waste of time!


Modeling the plan, the bill of materials, the manufacturing steps etc. is not the challenging part of the Actual vs Plan gap-tracking relationship.

Typically the tracking context needs its own distinct model, a totally different one from the planning model. For tracking cargo shipments, for example, you’d probably track loading and unloading events at various ports: the tracking model would be a journal of events. For tracking the consumption of materials, the tracking model would be a time-stamped snapshot of the current inventory.
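As a rough sketch of the inventory flavor (all names hypothetical), the tracking model can be as simple as a journal of timestamped snapshots, with no structural resemblance to the recipe model:

    // Hypothetical sketch of a tracking model: a journal of timestamped facts,
    // here inventory snapshots of ingredients, nothing like the recipe model.
    import java.time.Instant;
    import java.util.ArrayList;
    import java.util.List;

    record InventorySnapshot(Instant takenAt, String ingredient, int quantityOnHand) {}

    class ConsumptionJournal {
        private final List<InventorySnapshot> entries = new ArrayList<>();

        void record(InventorySnapshot snapshot) {
            entries.add(snapshot);
        }

        // Actual consumption of one ingredient between two snapshots
        int consumedBetween(InventorySnapshot before, InventorySnapshot after) {
            return before.quantityOnHand() - after.quantityOnHand();
        }
    }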


The tracking model may link to the plan. This case is a bit like the Knowledge Level pattern (Fowler): the plan is the Knowledge Level, and the tracking of the actual production would be the Operation Level. Here the Knowledge Level defines the ideal case, but the actual behavior at the Operation Level will regularly diverge from the ideal, and this discrepancy is precisely what we want to track.


The tracking can also live in its own bubble, with no link to the plan. If that’s the case, a separate reconciliation mechanism will do the comparison to highlight the gaps.
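Such a reconciliation could be as small as this sketch (hypothetical names), comparing the planned quantities with the actual ones and reporting only the gaps:

    // Hypothetical reconciliation sketch: planned consumption (from the recipes)
    // vs actual consumption (from the tracking journal), keeping only the gaps.
    import java.util.HashMap;
    import java.util.Map;

    class Reconciliation {
        static Map<String, Integer> gaps(Map<String, Integer> planned, Map<String, Integer> actual) {
            Map<String, Integer> gaps = new HashMap<>();
            for (Map.Entry<String, Integer> entry : actual.entrySet()) {
                int plan = planned.getOrDefault(entry.getKey(), 0);
                int gap = entry.getValue() - plan;
                if (gap != 0) {
                    gaps.put(entry.getKey(), gap); // positive = waste or overconsumption
                }
            }
            return gaps;
        }
    }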

A developer example

As usual, as developers we are already familiar with this actual vs plan dichotomy: it’s just called testing.

A test defines the plan: that’s the expected value, or the expected behavior on a mock. A test also observes the actual result of the code and tracks every gap between them in the test results report. It’s obvious that the model of the plan and the model tracking the actual results are different, and use a totally different language.

In this area, simulation testing also uses a reconciliation mechanism a posteriori to track the discrepancies.

Another example is simply software project planning and tracking. With software projects we know very well that forcing a match between plan and execution is a really dumb idea.


The key thing in this stereotypical relationship is to be fully aware of the two distinct contexts. Avoid the trap of mixing both contexts, unless it is a deliberate and conscious decision. Usually recognizing the two contexts will lead to the conclusion that distinct models are needed, and this will make everything simpler.

Many thanks to @ziobrando and @mathiasverraes for the early feedback and some complements incorporated into the text.


Canary tests

Canary Tests are minimal tests to quickly and automatically verify that everything you depend on is ready. You run Canary tests before other time-consuming tests, and before wasting time investigating your code when the other tests are red. If a canary test fails, you know you have to fix something in the environment first.

This idea of a Canary Test is different from Canary Deployment, where you deploy to a small fraction of your users to check everything’s fine before rolling out to more users.

Save time by checking what should always be OK

Canary tests check for the obvious and frequent sources of issues, such as:

  • Network connectivity: firewall rules OK, ports open, proxy working fine, NAT, ping below a good threshold
  • Databases and middleware are up
  • Disk quota for logs is not almost full
  • Every needed login and password is valid
  • Installed software is available in the right version: DLLs installed, registry set up, environment variables set, user directories all exist, the framework and OS versions fit, timezone and locale are as expected
  • Reference data integrity and consistency (dates, valuations…) are OK
  • Database schema and the audit of applied scripts are as expected
  • Licences are not expired (there is usually a way to check that automatically)

Canary tests should run regularly, ideally before any expensive tests like end-to-end tests. Of course you want to run them whenever there is trouble somewhere, before wasting time on manual investigation of your code when the expected environment is not fully available.

Even at the code level, a canary test is just a trivial test to verify that the testing framework works correctly, as mentioned by Marcus on his blog:


Don’t forget to verify that your tests can fail too!
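At the code level such a canary can be as small as this sketch (JUnit 5 style); to follow the advice above, temporarily invert the assertion once to check that it can indeed fail:

    // A trivially-green test: if even this fails, the test framework or the
    // environment is broken, not the code under test.
    import static org.junit.jupiter.api.Assertions.assertTrue;
    import org.junit.jupiter.api.Test;

    class CanaryTest {
        @Test
        void theTestingFrameworkItselfWorks() {
            assertTrue(true); // invert to assertTrue(false) once, to see it fail
        }
    }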

Simple and low-maintenance

The canary test tools should not assume much about the application. They must be independent of new developments so they stay as stable as possible. They should require little to no maintenance.

One way to do that in practice is to simply scan the configuration files for every URL and ping each one against a predefined time threshold. Any log path mentioned in the configuration files can be checked for the required write permissions and available disk space. Any login and password can be checked too, even though this may be more complicated.
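As a rough sketch (the class name and the config file location are made up), such a canary could scan a properties file for anything that looks like a URL and check that each one responds within a threshold:

    // Hypothetical sketch: scan a .properties file for URLs and check each one
    // responds within a time threshold.
    import java.io.FileInputStream;
    import java.net.HttpURLConnection;
    import java.net.URL;
    import java.util.Properties;

    public class ConnectivityCanary {

        public static void main(String[] args) throws Exception {
            Properties config = new Properties();
            config.load(new FileInputStream("app.properties")); // assumed location

            for (String key : config.stringPropertyNames()) {
                String value = config.getProperty(key);
                if (value.startsWith("http")) {
                    check(key, value, 2000); // 2-second threshold, arbitrary
                }
            }
        }

        static void check(String key, String url, int timeoutMillis) throws Exception {
            HttpURLConnection connection = (HttpURLConnection) new URL(url).openConnection();
            connection.setConnectTimeout(timeoutMillis);
            connection.setReadTimeout(timeoutMillis);
            long start = System.currentTimeMillis();
            int status = connection.getResponseCode();
            long elapsed = System.currentTimeMillis() - start;
            System.out.println(key + " -> " + status + " in " + elapsed + "ms");
        }
    }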

Canary tests are documentation too

Doing Canary tests may require explicit declarations of expectations, e.g. an annotation AssumedPermission(‘777’) to declare the permissions required on the files referenced in the configuration files. Alternatively you may rely on a Convention Over Configuration principle: for example, every variable following a given naming convention is assumed to be a log path to check against some predefined expectations, like being writable and being within its disk quota.

When you add canary tests, this automation itself is a form of documentation that makes assumptions more explicit.

You could export a report of every canary test that has been run into a readable form that can become part of your Living Documentation.


Øredev 2013 – What you probably missed

Øredev 2013 was last week, and it was fantastic!

Sharing knowledge

Øredev is in Malmö, Sweden. It’s very close to Copenhagen, so you can fly there and then take a 20-minute train to Malmö.

It’s a fantastic conference, totally vendor-neutral (that’s very important). It’s big yet friendly, with a mix of well-established topics and more experimental ideas. This year the theme was “The Arts”, and as a result it was deliberately provocative or weird in some aspects, and that is a good thing!

For me the highlights of the conference were the radical ideas brought by two guys with some experience in the business: Woody Zuill and Fred George (I’ll come to them in a minute). I also very much enjoyed how Jessica Kerr @jessitron manages to make alternative ways of thinking more accessible and attractive for developers using mainstream languages like Java or C#. Unfortunately I missed @Bodil’s talks because the room was too packed to even open the door…

Before the conference you can attend one-day trainings, and I decided to attend the Value-Driven Product Development course by JB Rainsberger @jbrains. It’s a very good course, quite advanced and probably not for beginners. I already knew a lot about BDD and had attended other courses, yet I still learnt a lot during this workshop. I missed JB’s other talks, but I want to watch the videos since I got very good feedback from other attendees.

It was interesting to listen to experience reports (New Frontiers For In-House Legal Practice by Kate Sullivan, Data @ King – How we are able to analyze 100M DAU by Mats-Olov Eriksson, Curiosity killed the cat, but what kills curiosity? by Ann-Marie Charrett @charrett, Less is more! – when it comes to art and software, by @JimmyNilsson) with anecdotes and honest accounts of successes, failures and evolutions of mindset.

Radical Ideas

The Øredev program committee likes to take risks and challenge the way we think about software, as demonstrated by Woody’s and Fred’s talks, but also by the talk Code as a crime scene by Adam Petersen Tornhill @AdamTornhill. Adam reuses forensic methods from crime investigations to help with large legacy code bases. He built the tool CodeMaat to visualize likely aggressions on the code base based on these ideas.

More radically, Woody Zuill @WoodyZuill talked about the Mob Programming approach his team has been practicing for some time now. He does not claim you should do the same, and he explains that this approach is just the result of doing more of the good things found during retrospectives. His team found that working together on one task at a time, on one single machine, was good, so they decided to do that all the time. You must watch his talk: Mob Programming, A Whole Team Approach (Roy “Woody” Zuill). It includes a time-lapse video and is very interesting. It also challenges the way we think about work: what if the actual “work” is what happens in between what we usually call “work”?

Very radical too, Fred George @fgeorge52 talked about his approach of Programmer Anarchy, “because that’s what it is”. He has now replicated the experiment at two different companies, including a rather traditional one (the Daily Mail newspaper), and is starting again at yet another. Again, he does not claim you should do the same, just that it works for them. Again using the power of retrospectives, they got rid of every role except the customer and the programmer. They don’t use the usual software craftsmanship practices like testing and refactoring. However they take great care of the business domain, just like a trader and a developer working closely together can end up giving suggestions to each other, both ways. As Fred says: “Power to the programmer!”

This approach works thanks to the use of Micro-Services. This style of architecture in itself is also a bit radical, with a “rapid”, an ordered bus of all the events of the whole system, and a lot of very small, cohesive, disposable micro-services that listen and publish to the bus. You can copy-paste a service to create another, you can rewrite a service rather than make changes, you can plug your new service directly into production! It may sound chaotic but in my opinion this style is disciplined indeed.

Woody gave another talk, No Estimates: Let’s explore the possibilities (Roy “Woody” Zuill). It’s really a beautiful talk, thanks to the beautiful illustrations by his wife Andrea. Woody does a great job at making us question our need for estimates, what they really mean and how they can harm. More importantly, he suggests that estimates are an obstacle to delivering something truly wonderful!

I was lucky to spend some time talking with Woody and Fred, and what they do is very exciting. It’s a paradox, but both still really follow agile values, despite taking huge liberties with the usual principles and practices. Both Fred and Woody also know a lot about object-oriented principles and made sure their teams were skilled in them too. However, in each case the experiments are also biased by the very presence of outstanding developers like Fred or Woody!

Testing is not just checking

Software development requires a mix of many different skills. Some of the important skills revolve around testing. At Øredev you could listen and talk to some of the most notable representatives of the testing community: Heuristics of Testability (James Bach) @jamesmarcusbach, Regression Obsession (Michael Bolton) @michaelbolton, Balancing ATDD, GUI Automation and Exploratory Testing (Michael Larsen) @mkltesthead, and Curiosity killed the cat, but what kills curiosity? by Ann-Marie Charrett @charrett. Other talks (The Beauty of Minimizing Effort and Maximizing Creativity While Integrating Performance Throughout the Lifecycle by Scott Barber and The Psychology of Testing by Misko Hevery) were also about testing.

I realized that testing is much much more than just checking facts. There is a whole universe of testing practices that you are probably not even aware of, and most of this universe cannot and should not be automated.

Software development is a creative job!

As part of the theme “The Arts”, some talks were not about software development. I really loved the talk Shakespeare in Dev (Thomas Q Brady) and the opening keynote of the second day, “The Creativity (R)Evolution” by Denise Jacobs @DeniseJacobs. Denise managed to trigger in many attendees in the room the desire to write, talk and share insights during her keynote!

My talk

I was excited to talk at Øredev on Friday after lunch: Refactor your specs! (Cyrille Martraire). The room was almost full, which may suggest that the topic is of interest to many. As a speaker I loved the professionalism of the staff doing the video, sound and organization all around so that everything ran smoothly for everyone. Thanks a lot to you all! Overall my talk was well received, and I got many good questions and very good feedback. As I said, this talk is just the beginning of a conversation that will go on, so feel free to contribute.

All the Øredev videos are available on this page (still not complete at the time of writing), so have fun and enjoy them all! Also have a look at the #oredev hashtag on Twitter for more quotes, and don’t forget to follow me at @cyriux on Twitter!







TDD Vs. math formalism: friend or foe?

It is not uncommon to oppose the empirical process of TDD, together with its heavy use of unit tests, to the more mathematically based techniques, with “formal methods” and formal verification at the other end of the spectrum. However, I experienced again recently that the process of TDD can indeed help discover and draw upon math formalisms well suited to the problem at hand. We then benefit from the math formalism for an easier and more correct implementation.

It is quite frequent that maths structures, or more generally “established formalisms” as Eric Evans would say, are hidden everywhere in the business concepts we need to model in software.

Dates and how we take liberties with them for trading of financial instruments offer a good example of a business concept with an underlying math structure: traders of futures often use a notation like ‘U8’ to describe an expiry date like September 2018; ‘U’ means September, and the ‘8’ digit refers to 2018, but also to 2028, and 2038 etc. Notice that this notation only works for 10 years, and each code is recycled every decade.

The IMM trading floor in the early 70's (photo CME Group)

In the case of IMM contract codes, we only care about quarterly dates in:

  • March (H)
  • June (M)
  • September (U)
  • December (Z)

This yields only 4 possibilities for the month, combined with the 10 possible year digits, hence 40 different codes in total, over the range of 10 years.

How does that translate into source code?

As software developers we are asked all the time to manage such IMM expiry codes:

  • Sort a given set of IMM contract codes
  • Find the next contract from the current “leading month” contract
  • Enumerate the next 11 codes from the current “leading month” contract, etc.

This is often done ad hoc, with a gazillion functions, one for each use case, leading to thousands of lines of code that are hard to maintain because they parse the ‘U8’ format every time we want to calculate something.

With TDD, we can now tackle this topic with more rigor, starting with tests to define what we want to achieve.

The funny thing is that in the process of doing TDD, the cyclic logic of the IMM codes struck me and strongly reminded me of the cyclic group Z/nZ. I had met this strange math creature at school many years ago, and I had a hard time with it back then. But now, on a real example, it was definitely more interesting!

The source code (Java) for this post is on Github.

Draw on established formalisms

Thanks to Google it is easy to find something even with just a vague idea of how it’s named, and thanks to Wikipedia, it is easy to find out more about any established formalism like Cyclic Groups. In particular we find that:

Every finite cyclic group is isomorphic to the group { [0], [1], [2], …, [n − 1] } of integers modulo n under addition

The Wikipedia page also mentions a concept of the product of cyclic groups in relation with their order (here the number of elements). Looks like this is the math-ish way to say that 4 possibilities for quarterly months combined with 10 possible year digits give 40 different codes in total.

So what? It sounds like we could identify the set of the 4 months with a cyclic group, the set of the 10 year digits with another, and even the combination (product) of both with a cyclic group of order 10 * 4 = 40 (even though the addition operation will not be called that). So what?

Because we’ve just seen that there is an isomorphism between any finite cyclic group and the cyclic group of integers of the same order, we can just switch to the integer cyclic group logic (plain integers and the modulo operator) to simplify the implementation big time.

Basically the idea is to convert from the IMM code “Z3” to the corresponding ‘ordinal’ integer in the range 0..39, then do every operation on this ‘ordinal’ integer instead of the actual code. Then we can format back to a code “Z3” whenever we really need it.
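Here is a sketch of that ordinal trick (the actual code on Github may differ in the details):

    // Sketch: map an IMM code like "Z3" to an integer in 0..39, do all the
    // arithmetic modulo 40, and only format back to a code when needed.
    public class ImmCodeSketch {
        private static final String MONTHS = "HMUZ"; // March, June, September, December

        static int toOrdinal(String code) {
            int month = MONTHS.indexOf(code.charAt(0));                // 0..3
            int yearDigit = Character.getNumericValue(code.charAt(1)); // 0..9
            return yearDigit * 4 + month;                              // 0..39
        }

        static String toCode(int ordinal) {
            int positive = Math.floorMod(ordinal, 40); // stay on the 40-element cycle
            return "" + MONTHS.charAt(positive % 4) + (positive / 4);
        }

        static String next(String code) {
            return toCode(toOrdinal(code) + 1);        // e.g. "Z3" -> "H4"
        }
    }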

Do I still need TDD when I have a complete formal solution?

I must insist that I did not come to this conclusion that easily. The process of TDD was very helpful for not getting lost in every possible direction along the way. Even when you have found a formal structure that could solve your problem in one go, even in a “formal proof-ish” fashion, you perhaps need fewer tests to verify correctness, but you surely still need tests to think through the specification part of your problem. This is your gentle reminder that TDD is not about unit tests.

Partial order in a cyclic group

Given a list of IMM codes, we often need to sort them for display. The problem is that a cyclic group has no total order: the ordering depends on where you are in time.

Let’s take the example of the days of the week, which also form a cycle: MONDAY, TUESDAY, WEDNESDAY… SUNDAY, MONDAY etc.

If we only care about the future, is MONDAY before WEDNESDAY? Yes, unless we’re on TUESDAY. If we’re on TUESDAY, MONDAY means next MONDAY, hence it comes after WEDNESDAY, not before.

This is why we unfortunately cannot just implement Comparable to take care of the ordering. Because we need a partial order that is aware of a reference IMM code, we have to resort to a Comparator that takes the reference IMM code in its constructor.

Once we identify that situation with the cyclic group of integers, it becomes easy to shift both operands of the comparison so that the reference becomes 0, before comparing them in a safe (total-order-ish) way. Again, this trick is made possible by the freedom to experiment given by the TDD tests. As long as we’re still green, we can go ahead and try any funky approach.
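A sketch of such a comparator, reusing the toOrdinal helper from the sketch above:

    // Partial-order comparator relative to a reference code: shift both operands
    // so that the reference becomes 0, then compare them as plain integers.
    import java.util.Comparator;

    class ImmCodeComparator implements Comparator<String> {
        private final int referenceOrdinal;

        ImmCodeComparator(String referenceCode) {
            this.referenceOrdinal = ImmCodeSketch.toOrdinal(referenceCode);
        }

        @Override
        public int compare(String left, String right) {
            int a = Math.floorMod(ImmCodeSketch.toOrdinal(left) - referenceOrdinal, 40);
            int b = Math.floorMod(ImmCodeSketch.toOrdinal(right) - referenceOrdinal, 40);
            return Integer.compare(a, b);
        }
    }

Sorting a list of codes relative to the current leading month is then just codes.sort(new ImmCodeComparator("H4")).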

Try it as a kata

This example also makes a good coding kata, which we tried at work not long ago. Given a simple presentation of the IMM contract code format, you can choose to code the sort, find the next and previous codes, and perhaps even optimize for memory (cache the instances, e.g. lazily) and speed (cache the toString() value, e.g. in the constructor) if you still have some time.

In closing

Math structures are hidden behind many common business concepts. I have developed a habit of looking for them whenever I can, because they always make us think, they help question our understanding of the domain problem (“is my domain problem really similar in some way to this structure?”), and of course because they often offer wonderful ready-made implementation hints!

The source code (Java) for this post is on Github.
Follow me on Twitter!
Photo: CME Group


DDD is back in Paris with a brand new Meetup group!

The first DDD Open Forum of the brand new Paris DDD meetup was last night, hosted by Arolla, and it was good to meet again after a long time with twenty-some Paris DDD aficionados!

@tjaskula, the organizer of this new group, opened the evening with a welcome introduction. He also gave many suggestions of areas for discussion and debate.

A quick survey revealed that one third of the participants were new to Domain-Driven Design, while another third was already rather comfortable with it. This correlated with a rather senior audience, with only one attendee with less than 5 years of experience and many developers with 10+ years, including some with 22 and 30 years of experience and still coding! If you work in Paris, I guess you know them already…

It was an open space session, so we first proposed a lot of topics for discussion with post-its on the wall: how to sell or convince people about DDD, an introduction to the concepts, synchronizing between contexts…

We all decided to start with a walk-through of the fundamentals of DDD: Bounded Contexts, Ubiquitous Language, Code as Model… It was great to have this two-way knowledge transfer between seniors and juniors, in an interactive fashion and with lots of questions, including some rather challenging and skeptical ones! There was also some UML bashing, of course.

We concluded by eating Galettes des Rois, together with cider and beer, and had a lot of fun. Thanks everyone for your questions and contributions, and see you soon at the next meetup!

The many proposals for discussion


Surface-area over volume ratio – a metaphor for software design

There’s a metaphor I had in mind for a long time when thinking about software design: because I’m proudly lazy, in order to make the code smaller and easier to learn, I must do my best to reduce the “surface-area over volume ratio” of the software.

Surface-area over volume ratio?

I like the Surface-area over volume ratio as a metaphor to express how to make software cheaper to discover and learn, and smaller to maintain as well.

For a given object, the surface-area over volume ratio is the amount of surface area per unit volume. For buildings and for animals, the smaller this ratio, the less the heat loss during the winter, hence a better thermal efficiency.

Have you ever noticed that huge warehouses are always cool, even during the summer when it’s hot? This is simply because in our real 3D world the surface-area over volume ratio gets much smaller as the absolute size of the building increases.

The theory also says that the sphere is the optimal shape with respect to this ratio. In fact, the more “compact” the shape, the smaller the ratio; or, the other way round, we could define the compactness of an object directly by its surface-area over volume ratio.

A dodecahedron, a volume that approximates a sphere with just 2D facets (Wikipedia picture)

What about software design?

Let’s consider that each method signature of each interface is part of the surface-area of the software, because this is what I have to learn primarily when I join the project. The larger the surface-area, the more time I’ll need to learn, provided I can even remember all of it.

Larger surface is not good for the developers.

On the other hand, the implementation is part of what I would call the volume of the software, i.e. where the code is really doing its stuff. The more volume, the more powerful and rich the software. And of course the point of Object Orientation is that you don’t have to learn the whole implementation in order to work on the project, as opposed to the interfaces and their method signatures.

Larger volume is good for the users (or the value brought by the software in general)

As a consequence we should try to minimize the surface-area over volume ratio, just like we’re trying to reduce it when designing a green building.

Can we extrapolate that we should design software to be more “compact” and more “sphere”-like?

Facets-like interfaces

Reusing the same interface as much as possible is obviously a way to reduce the surface-area of the software. Adhering to interfaces from the JDK or Google Guava, interfaces that are already well-known, helps even more: in our metaphor, an interface that we don’t have to learn comes for free, like a perfectly insulated wall in a building. We can almost omit it from our ratio.

To further reduce the ratio, we can seek every opportunity to reuse a minimum set of common interfaces as much as possible, even across unrelated concepts. At the extreme of this approach we get duck typing in dynamic languages. In our usual languages like Java or C#, we must introduce additional small interfaces, usually with one single method.

For example, in a trading system, every class with an isInCurrency(Currency) method can implement a common interface CurrencySpecific. As a result, a lot of processing (filtering etc.) on anything related to currencies can be done on all these classes without any other knowledge about them, except their currency-specificity.
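In Java, this kind of facet could look like the following sketch (using the JDK’s own Currency class):

    // A single-method "facet" interface shared across otherwise unrelated classes,
    // plus generic processing that relies on nothing but that facet.
    import java.util.Currency;
    import java.util.List;
    import java.util.stream.Collectors;

    interface CurrencySpecific {
        boolean isInCurrency(Currency currency);
    }

    class CurrencyFilters {
        static <T extends CurrencySpecific> List<T> inCurrency(List<T> items, Currency currency) {
            return items.stream()
                        .filter(item -> item.isInCurrency(currency))
                        .collect(Collectors.toList());
        }
    }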

In this example, the currency-specificity we extracted into one interface is like a single facet over a larger volume made of several implementations. It makes our design more compact: easier to learn, while offering a rich set of behaviors.

The limit of this approach of putting a lot of implementation code under the same interfaces is that sometimes it really makes no domain sense. Since code is primarily meant to describe the domain without causing confusion, we must be careful not to go too far. We must also take great care when sharing interfaces between Bounded Contexts: there’s a high risk of excessive coupling.

Faceted artwork

Yet another metric?

This metric could be measured by a tool; however, the primary value is not in checking the figures, but in thinking about and taking care of making the design easy to learn (less surface-area) while delivering a lot of valuable behaviors (more volume).

Follow me on Twitter!


Collaborative Artifacts as Code

A software development project is a collaborative endeavor. Several team members work together and produce artifacts that evolve continuously over time, a process that Alberto Brandolini (@ziobrando) calls Collaborative Construction. Regularly, these artifacts are taken in their current state and transformed into something that becomes a release. Typically, source code is compiled and packaged into some executable.

The idea of Collaborative Artifacts as Code is to acknowledge this collaborative construction phase and push it one step further, by promoting as many collaborative artifacts as possible into plain text files stored in the same source control, while everything else is generated, rendered and archived by the software factory.

Collaborative artifacts are the artifacts the team works on and maintains over time, thanks to the changes made by several people through a source control management system such as SVN, TFS or Git, with all its benefits like branching and versioning.

Keep together what varies together

The usual way of storing documentation is to put MS Office documents into a shared drive somewhere, or to write random stuff in a wiki that is hardly organized.

Either way, this documentation will quickly get out of sync because the code is continuously changing, independently of the documents stored somewhere else, and as you know, “Out of sight, out of mind”.


We now have better alternatives

Over the last few years, there have been changes in software development. Github has popularized the README overview file written in Markdown. DevOps brought the principle of Infrastructure as Code. The BDD approach introduced the idea of text scenarios as living documentation and an alternative to both specifications and acceptance tests. New ways of planning what a piece of software is supposed to do have appeared, such as Impact Mapping.

All this suggests that we could replace many informal documents with more structured alternatives, and have all these files collocated within source control together with the source code.

In any given branch in the source control we would then have something like this:

  • Source code (C#, Java, VB.Net, VB, C++)
  • Basic documentation through a plain README.md and perhaps other .md files wherever useful, to give a high-level overview of the code
  • SQL code as source code too, or through Liquibase-style configuration
  • Living Documentation: unit tests and BDD scenarios (SpecFlow/Cucumber/JBehave feature files)
  • Impact Maps (and any other mind maps), written as text and then rendered via tools like text2mindmap
  • Any other kind of diagram (UML or general-purpose graphs), ideally defined in plain text format and then rendered through tools (Graphviz, yUml)
  • Dependency declarations as manifests (Maven, Nuget…) instead of documentation on how to set up and build manually
  • Deployment code as scripts or Puppet manifests for automated deployment instead of documentation on how to deploy manually

Plain Text Obsession is a good thing!

Nobody creates software by directly editing the executable binary that the users will actually run, yet it is common to directly edit the MS Word document that will be shipped in a release.

Collaborative Artifacts as Code suggests that every collaborative artifact should be text-based to work nicely with source control, and to be easy to compare and merge between versions.

Text-based formats should be preferred whenever possible, e.g. .csv over .xls, .rtf or .html over .doc; otherwise the usual big PPT files must go to another dedicated wiki where they can be safely forgotten and become instantly deprecated…

Like a wiki, but generated and read-only

My colleague Thomas Pierrain summed up the benefits of this approach: such documentation will

  • always be up-to-date and versioned
  • be easily diff-able (text files, e.g. in Markdown format)
  • respect the DRY principle (with the SCM as its golden source)
  • be easily browsable by everyone (DEV, QA, BA, Support teams…) in a read-only, readable, wiki-like web site
  • be easily modifiable by team members in a well-known and official location (as easy as creating or modifying a text file in the SCM)

What’s next?

This approach is nothing really new (think about LaTeX…), and many of the tools we need for it already exist (Markdown renderers, web sites to organize and display Gherkin scenarios…). However, I have never seen this approach fully applied in an actual project. Maybe your project is already doing that? Please share your feedback!

UPDATE: My colleague Thomas Pierrain wrote a post on this idea:


The art of creative ceramic at Atelier Murmur

Xinhe, Jinjin and Zhuo ‘Jo’ form a collective of 3 young Chinese ceramic designers who together experiment with the art of fine ceramics under the name “Atelier Murmur”. Their studio is based in Hangzhou, China. This city is famous for its beautiful lake, the “West Lake”, and for the preserved and restored traditional houses and streets nearby. If you intend to visit China, you really should consider going there!

A story between France and China

Xinhe and Jinjin lived a few years in France, from Marseille to Paris, where we met them and became friends. Zhuo also studied in France for some time. They have now all moved back to China to open a bigger studio in the Art Village, surrounded by beautiful green tea fields.

We had the pleasure of visiting their newly installed studio, and of discovering their own brand new big ceramic kiln, a dream piece of equipment for many ceramists.

I had known their ceramic work for some time, and we already own some beautiful pieces they designed, yet it was exciting to discover how they make them in their studio. From the liquid clay to the shaped object, like a glass or bowl drying in a plaster mold, I realized how little I knew about ceramic-making. It has very little to do with pottery!

Creation is all about taking risks

Moreover they are designers, not just makers, and as such they experiment with creative ways to craft pieces, using tree leaves or textile ribbons to imprint custom textures onto the ceramic.

It’s a trial-and-error process. They take a lot of risks trying new things, and as such they are familiar with failures like broken pieces. As they explain: “you don’t learn if you don’t go out of your comfort zone”. As software developers we’re also familiar with this way of thinking…

With a mainstream culture of materialism in China these days, it’s very refreshing to see young innovators like the Murmur artists working together to make their creative idealism become a realistic way of life. They know that money and business success are not the only “quality of life” metrics there are. Doing what you love and living up to your true aspirations matters a lot.

That said, it does not necessarily have to interfere with commercial success either. When they started, they sold a lot of hand-made, unique pieces like glasses, bowls and jewels in the streets of Paris. Now they have orders for batches of hundreds of pieces, for art galleries and various clients, yet still all hand-made in their studio.

Even if they love the creative work most, from time to time they also love to sell face to face to customers in order to get direct feedback, and this is important to them. Again, as agile software developers we’re also familiar with this way of thinking!

Find out more about their creations at Atelier Murmur.


Collaborative Construction by Alberto Brandolini

Alberto Brandolini (@ziobrando) gave a great talk at the last Domain-Driven Design eXchange in London. In this talk, among many other insights, he described a recurring pattern he had seen many times in several very different projects: “Collaborative Construction, Execution & Tracking. Sounds familiar? Maybe we didn’t notice something cool”

Analysis conflicts are hints

In various unrelated projects we see similar problems. Consider a project that deals with building a complex artifact like an advertising campaign. Creating a campaign involves a lot of different communication channels between many people.

On this project, the boss said:

We should prevent the user from entering incorrect data.

Sure, you don’t want corrupt data, which is reasonable: you don’t want to launch a campaign with wrong or corrupt data! However, the users were telling a completely different story:

[the process with all the strict validations] cannot be applied in practice, there’s no way it can work!

Why this conflict? In fact they are talking about two different processes, and they just could not see that. Sure, it takes the acute eye of a Domain-Driven Design practitioner to recognize that subtlety!

Alberto mentions what he calls the “Notorious Pub Hypothesis”: think about the pub where all the bad people gather at night, where you don’t go if you’re an honest citizen. The hypothesis comes from his mother asking:

Why doesn't the police shut down this place?

Why doesn’t the police shut down this place? Actually, there is some value in having this kind of place: since the police know where all the bad guys are, it makes it easier to find them when you have to.

In a similar fashion, maybe there’s also a need somewhere for invalid data. What happens before we have strictly validated data? Just like the bad guys who exist whether we like it or not, there is a whole universe outside of the application, in which the users prepare the advertising campaign over more than a month of data preparation, lots of emails and many other types of communication, all of it untraceable so far.

Why not acknowledge that and include this process, a collaborative process, directly into the application?

Similar data, totally different semantics

Coming from a data-driven mindset, it is not easy to realize that just because the data structures are pretty much the same, it doesn’t mean you have to live with only one representation in your application. Same data, completely different behavior: this suggests different Bounded Contexts!

The interesting pattern recurring in many applications is a split between two phases: one phase where multiple stakeholders collaborate on the construction of a deliverable, and a second phase where the built deliverable is stored, can be reviewed, versioned, searched etc.

The natural focus of most projects seems to be on the second phase; Alberto introduced the name Collaborative Construction to refer to the first phase, often missed in the analysis. Now we have a name for this pattern!

The insight in this story is to acknowledge the two contexts: one for the collaborative construction, the other for managing the outcome of the construction.

Looks like “source Vs. executable”

During collaborative construction, it’s important to accept inconsistencies, warnings or even errors, incomplete data, missing details, because the work is in progress, it’s a draft. Also this work in progress is by definition changing quickly thanks to the contributions of every participant.

Once the draft is ready, it is then validated and becomes the final deliverable. This deliverable must be complete, valid and consistent, and cannot be changed any more. It is there forever. Every change becomes a new revision from now on.

We therefore evolve from draft semantics to “printed” or “signed” semantics. The draft requires comments, conversations, proposals, decisions. On the other hand, the resulting deliverable may require a version history and release notes.

The insight that we have these two different Bounded Contexts in turn helps dig deeper into the analysis, to discover that we probably need different data and different behaviors for each context.

Some examples of this split in two contexts:

  • The shopping cart is a work in progress, that once finalized becomes an order
  • A request for quote or an auction process is a collaborative construction in search of the best trade condition, and it finally concludes (or not) into a trade
  • A legal document draft is worked on by many lawyers before it is signed off, after the negotiations have happened, to become the legally binding contract
  • An example we all know very well: our source code in source control is a work in progress between several developers, and then the continuous integration compiles it into an executable and a set of reports, all immutable. It’s OK to have compilation errors and warnings while we’re typing code. It’s OK to have checkstyle violations until we commit. Once we release, we want no warnings and every test to pass. If we need to change something, we simply build another revision; each release cannot change (unless we patch, but that’s another gory story)
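Sticking with the advertising campaign story, a minimal sketch of the two models could look like this (all names hypothetical): a mutable draft that tolerates missing data, and an immutable deliverable created only once the draft has been validated.

    // Hypothetical sketch: the collaborative-construction context tolerates an
    // incomplete draft; validation happens only when crossing into the other context.
    import java.util.ArrayList;
    import java.util.List;

    class CampaignDraft {
        final List<String> channels = new ArrayList<>();
        String name; // may stay null while the draft is being worked on

        Campaign publish() {
            if (name == null || channels.isEmpty()) {
                throw new IllegalStateException("the draft is not complete yet");
            }
            return new Campaign(name, List.copyOf(channels)); // immutable from now on
        }
    }

    // The finalized deliverable: complete, valid, and never modified again
    record Campaign(String name, List<String> channels) {}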

UX demanding

Building software to deal with collaborative construction is quite demanding with respect to the User Experience (UX).

Can we find examples of Collaborative Construction in software? Sure, think about Google Wave (though it did not end well), Github (successful but not ready for normal users that are not developers), Facebook (though we’re not building anything useful with it).

Watch the video of the talk

Another note, among many others I took away from the talk, is that from time to time we developers should ask the question:

what if the domain expert is wrong?

It does happen that the domain expert is going to mislead the team and the project, because he’s giving a different answer every day, or because she’s focusing on only one half of the full domain problem. Or because he’s evil…

Alberto in front of Campbell's Soup Cans, of course talking about Domain-Driven Design (picture Skillsmatter)

And don’t hesitate to watch the 50-minute video of the talk, to hear many other lessons learnt, and also because it’s fun to listen to Alberto talking about zombies while talking about Domain-Driven Design!

Follow me (@cyriux) on Twitter!
