Agile France 2016 – Decentralized Architecture

At Agile France 2016 in Paris I ran an open-space session on Decentralized Architecture. This was an opportunity to collect various perspectives on a topic that I’ve been thinking about and discussing with colleagues over the last few years.

Architecture Definition Ambiguity

We started with a quick survey with all the attendees, who were asked to give “examples of architecture”, in any form and in their own interpretation of the word.

Going through the examples, the very first realization was that in practice there are many different visions of architecture:

  • “Preventing accidental complexity”
  • “Ensuring consistency”, “avoiding doing the same thing twice”, “coordination between multiple applications”
  • “The norms & blueprints: REST API, Service-Orientation…”
  • “The Main Decisions, especially the structuring decisions”
  • “The Vision, Consistency, Technical Debt management”
  • “The Coding guidelines: testable code, ability to extend…”
  • Psychological comfort: “We expect the architect to comfort the team and reassure them on the quality of their decisions”

Some of these definitions focus on the purpose, others on the solutions. We don’t mean the same thing at all when we use the word “architecture”. The definition of architecture is disturbingly ambiguous.

But this little survey was also a way for many to express their frustrations with respect to doing architecture and in particular with the architects.

The Architecture Frustration

The bottleneck effect

The most obvious complaint was that architecture as usually done by a central architecture group is a source of delays:

“architects slow us down by several weeks on any new project”.

As a result, it’s not uncommon for projects to game the rules to avoid going through the architects. For example, if every project beyond 100 man-days has to go through a mandatory architecture review, then a team with a genuine 120 man-day project may try to split it into two 60 man-day projects to stay under the threshold.

Conflicting schools of thought

The other complaint is that “The architects impose arbitrary rules we disagree with, and that are detrimental to our project”.

Perhaps the team is just wrong, in which case the architects are trying to improve things but fail to convince. Or perhaps the architects are outdated in their world view. In any case, when the teams and the architects follow different schools of thought, you can expect confusion, miscommunication and distrust.

Activity over Role

At this point of the session, another big realization was that we should not confuse the activity of “doing architecture” with the role of “the architect”. Taking care of the architecture is what matters most, regardless of whether you have the role of architect or not. This was the point put forward by my colleague Ly-Jia (@Ly_Jia) in her past talks on the topic.

Architecture as an activity over Architect as a role

Of course, when a team does not know how to architect (the activity), then they call an architect (the role) for help, who will bring their skills to solve the issues.

Architect as the Decision-Maker

While we mentioned before that some teams try to avoid dealing with their architects, others actually wish they had access to architects for advice, or even to make the decisions for them.

“We have no architect and we should have one to reduce our current mess”

“Without architect, it’s chaos.”

“We expect the architect to take the important decisions that are vital to the project”.

In this view, the architect is the decision-maker for the team. The team is then supposed to follow the decisions. The architects should check conformity after the work is done, even though they usually have no time to do that.

The architect as the decision-maker comes at a cost: it slows you down. This is hard to accept when you want a short time-to-market. If you knew how to make the decisions yourself within the team, you could go faster.

When the team knows how to do architecture

When the team has all the skills then it’s perfect, at least for their project. Decisions are local, fast, and well-informed.

“We don’t have an architect and it’s fine”.

This is an ideal situation we should nurture and encourage. However in practice not every team has all the skills. For example when surveying what’s important about architecture during the session at the conference, hardly anyone mentioned Data Authority as a concern.

Teams are probably not self-sufficient in all necessary skills. They probably need training and external help from people more skilled in architecture.

Architect as the Trainer on Architecture Skills

Many teams need training in architecture, and the architect is often the right person to teach that to the teams. In this view, the architect does not take decisions, but acts as a trainer, until there’s no need for training any more. It’s a biodegradable role, but with many teams, some turnover and the evolution of architecture skills, don’t expect that role to become useless anytime soon.

In fact this role of training the team already happens to some extent and informally during the meetings with the architects, even when the architects are the decision-makers. If they explain the decision-making and the rationale behind the decisions, then the attendees can learn how to reason like the architect. And perhaps next time the team can decide on its own in confidence.

Codifying the Architecture Skills

In a company I worked for, I used to collect every principle behind our decision-making in a list we called “the Codex”. This was an attempt to codify the way architects make decisions.

There were rules like “always know where the authoritative source of data is”, or “don’t ever talk about solutions before you have stated the problem”, or “YAGNI: Don’t build a framework, build only what you need (it’s up to the next project to decide if they want to extract a framework from your stuff)”.
The main problem with the Codex was that it was quickly growing bigger and bigger. Last time I looked there were 30+ principles listed.

Even if everything can work fine locally thanks to skilled teams, perhaps the global consistency can still be endangered by the local ignorance of the bigger context.

Taking care of the architecture within the team can work at small scale. At larger scale, we need specific skills and more importantly we need specific visibility over many applications: this cross-system knowledge becomes the rare resource, only known by a few people. In bigger companies the central architects usually play this role of bringing this visibility to local teams to help them make informed decisions. But they remain bottlenecks on the path of the project delivery.

Bottleneck-free Architecture

Communities of Practice are a standard solution to give all teams visibility on the overall state of the system. For example, delegates from each team meet in a two-hour meeting every two weeks to exchange knowledge about the global architecture, the current stakes and challenges, what already exists elsewhere, etc. It’s also a place to harmonize what they call architecture and what is desirable under that name.

In this view, the architecture can be seen as “the invisible frame, defined by consensus, which states what we like the system to be like”, as Jonathan (@jonathanperret) expressed it.

In addition to Communities of Practice, tools can help too. For example, GitHub Enterprise and its built-in search engine offer a simple way to explore the large-scale galaxy of projects by searching for keywords. This is helpful, for instance, to find out who else is working on “Refund policies” in order to coordinate with them. And there is more we can do in this area to make the bigger context accessible to a larger audience.
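As an illustration of what such a keyword search could look like programmatically, here is a minimal sketch; the Enterprise host name and the token are hypothetical, and the exact query details vary by installation:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Searches a (hypothetical) GitHub Enterprise instance for code mentioning "refund policies",
// to find out which other teams work on the same concept.
public class RefundPolicySearch {
    public static void main(String[] args) throws Exception {
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://github.mycompany.com/api/v3/search/code?q=refund+policies"))
                .header("Authorization", "token " + System.getenv("GITHUB_TOKEN"))
                .GET()
                .build();
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body()); // JSON listing the matching files and repositories
    }
}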

Large-Scale Governance

At larger scale, governance becomes a topic that matters. We want to manage the legacy, decide which components we want to decommission and which components we would like to invest into to make them more useful.

At this level we often talk about a “master plan” (“schema directeur”). This is usually done by committee, and it can take a lot of time to define. This kind of master plan aims at optimizing the whole information system, its consistency, the data consistency, the investments in relation with the strategy, etc. This is again a bottleneck, and a large-scale one indeed.

If we want to avoid this bottleneck, then we want to favor local architects, embedded within teams or within small departments. They spend most of their time working with their colleagues, doing work and ideally writing code as well. And they also meet regularly with their peers to exchange and coordinate on company-wide blueprints, in a federated fashion.

Tools can help as well. Registry-like documentation, aka “applications catalogues”, can for example enable anyone to get an overview of the overall system and its components.
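To make the idea concrete, here is a minimal sketch of what one entry of such a catalogue could contain; the field names are purely illustrative assumptions, not a standard:

// A minimal sketch of one entry in an applications catalogue; field names are illustrative.
public class ApplicationEntry {
    String name;              // e.g. "refund-service"
    String owningTeam;        // who to talk to before reusing or changing it
    String businessDomain;    // e.g. "Billing", "Refunds"
    String authoritativeData; // the data this application is the source of truth for
    String lifecycleStatus;   // e.g. "active", "to be decommissioned"
}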

However at large scale these documents are unlikely to be accessible to anyone but architects, as they are so complicated, with tons of boxes and arrows, or huge lists of recommendations, all with specific jargon.

Another problem at this scale is that the quantity of information to consider to reason on the architecture vastly exceeds the ability of any single brain. It’s just not possible for an architect, let alone for each team member, to be aware of every application, SLA and other concerns in a big company.

Coordination-less Architecture

Going a bit more radical at this point: is it really so obvious that this desirable “consistency” is always a good thing? Did anyone ever really investigate this question? Similarly, is waste necessarily a bad thing?

When we talk about all the expected benefits of architecture we easily see the value (remove duplication, standardize all the things, decide on THE unique tool for each function…) but we hardly see all the costs.

In particular most people ignore or underestimate the huge coordination cost associated with consensus on large-scale decisions. Most people also ignore or underestimate the cost of one-size-fits-all kind of decisions.

For example, contrast how standards are made by consortia, such as CORBA or the various WS-* standards, with Internet standards, which require at least two working implementations to already exist before a standard can be chosen.

In the second approach we accept some waste (work being done more than once) for the sake of proving that a good approach really is a good one. In the first approach, we try to be perfect upfront, at the expense of a long and expensive coordination process. This is waste too, just a different kind of waste.

Once we accept some waste, we can embrace some diversity too. “Amazon now has one single deployment tool, but it eventually emerged from a diversity of 3 or 4 tools that were accepted for years”. It was OK to have more than one tool for a given functionality. In particular, the choice of the standard tool did not delay the delivery of any project.

Of course this doesn’t mean that diversity should be infinite either. “At Spotify they run meetings to limit the number of technologies currently in use”. But it’s an evolving thing. Anyone should be able to try a new technology for a while, which may or may not become part of the standard technology list later.

On the governance of the applications, which is also an evolving thing, we could learn to renew each part more often. We could put an expiration date on each component. Or limit their maximum size to ensure they can be rewritten in a short period of time (microservices, anyone?). This is in contrast with master plans that aim to predict the future and try to specify the “target” of the information system. In fact I don’t know any sensible architect these days who still believes you could specify the target of the information system in detail.

Emerging Architecture from Local Decisions

Going further, at large scale you don’t really have a choice in the way you do architecture. Consider Amazon, which now has 1400 services: at this scale, you just can’t do architecture the old-fashioned way, it’s impossible.

But we know that at Amazon each team lives by a limited number of strict rules that are absolutely mandatory: “2-pizza teams”, “API-first” etc., as defined by Jeff Bezos in his famous memo. Few rules, strict, and promulgated by a dictator.

The challenge is to identify a set of dictatorial rules that do not constrain the freedom to deliver wonders too much, and that let the architecture emerge from purely local decisions.

It’s one challenge when you start a new company, and it’s another when you already have a large legacy, both legacy systems and legacy culture. And nobody talks about the companies where the boss also decided the rules and which failed anyway.

Are Micro-Services about self-organizing teams?

The same morning at Agile France, Emmanuel Gaillot (@egaillot) and Yannick Francois (@ya_f) had proposed a game on microservices where many teams had to build a game out of many services, without any formal or central coordination. It was very interesting to watch how the teams self-organized and how they found ways to avoid coordination as much as possible.

Ingredients for Emergence

How do we find the few rules that everybody has to abide by and that enable emergent properties in a large system? How do we deal with a number of parts we can’t keep in our heads?

We don’t have all the answers, but I’d mention 3 main orientations: Purpose, Options, and Automation.

Purpose-Oriented

A rule that applies to anyone at any scale should be a direct expression of a company goal or a direct consequence of its vision. And it should be abstract enough to be interpreted differently in various contexts.

Option-Oriented

One key example of a general rule is “create (cheap) options for the future”. This one is so general that it could work pretty much everywhere. Perhaps that’s what Jeff had in mind with his API-First principle: “Whatever you build must give us the option to sell it to the outside world if we wish to”.

Conformity Automation

The third aspect is automation. If you really want fault-tolerance, then you should test for it. If you can’t do that by hand on a large system, robots can. This is what Netflix famously achieved with their Simian Army. They literally managed to automate a large part of their architecture conformity checking with bots:

  • Latency Monkey induces artificial delays.
  • Conformity Monkey finds instances that don’t adhere to best practices and shuts them down.
  • Doctor Monkey taps into health checks to remove unhealthy instances.
  • Janitor Monkey searches for unused resources and disposes of them on the cloud.
  • Security Monkey finds security violations or vulnerabilities and terminates the offending instances.
  • 10-18 Monkey detects configuration and run-time problems in instances serving customers in multiple geographic regions.
  • Chaos Gorilla is similar to Chaos Monkey, but simulates an outage of an entire Amazon availability zone.

This automation forces everyone in each team to own the concerns enforced by the Simian Army, which also happen to be direct requirements. That’s a good thing. We’re not that far from a genuine Test-Driven Architecture at this stage.
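Inside each code base, the same spirit can be applied by expressing architecture rules as automated tests. For instance, a library such as ArchUnit (not mentioned in the session, purely an illustration) lets you write them as plain unit tests. A minimal sketch, assuming JUnit 5 and a hypothetical package layout:

import com.tngtech.archunit.core.domain.JavaClasses;
import com.tngtech.archunit.core.importer.ClassFileImporter;
import org.junit.jupiter.api.Test;

import static com.tngtech.archunit.lang.syntax.ArchRuleDefinition.noClasses;

class ArchitectureConformityTest {

    // Hypothetical root package of the application
    private final JavaClasses classes = new ClassFileImporter().importPackages("com.mycompany");

    @Test
    void domainMustNotDependOnInfrastructure() {
        // The rule reads like one line of the "Codex": it fails the build when violated,
        // instead of waiting for an architect to review the code.
        noClasses().that().resideInAPackage("..domain..")
                .should().dependOnClassesThat().resideInAPackage("..infrastructure..")
                .check(classes);
    }
}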

Closing

Thanks everyone for the discussions in this session!

Read More

Shanghai World Expo 2010

In late May we went to the Shanghai World Expo 2010 for two days. Here are pictures and some feedback on this huge event.

As for our overall impression: if you happen to go to Shanghai, you may as well go see the Expo 2010, but don’t plan to go there just for that. As Adam Minter noted in his blog (quite useful to prepare your visit to the expo):

if you’ve traveled internationally, there’s absolutely nothing inside of any of the pavilions that you haven’t seen before

This sums up our visit quite well, I was almost disappointed in fact.

Huge spaces, huge queues

For the two days we were there, the queues were absolutely staggering, up to 3h30 for the Japan pavilion and the Oil 4D movie for example, and just a bit less for the other most popular pavilions. This also means we did not visit them; waiting that long is not our cup of tea.

The 3-day tickets were no longer available, and we could only buy tickets for the current day, not even for the next day, meaning you have to buy them each morning. Many shops that advertise they sell tickets no longer have any (we are talking 20 days after the official opening).

We also had the feeling that the Expo definitely targets Chinese citizens only, not foreigners: for example, the movies have no subtitles.

Big big thanks to all the volunteers at the Expo! These young women and men who speak English are all really nice and helpful; they are the real -and only- face of the Expo, and they make the huge and rather unfriendly outdoor spaces easier to deal with. And you know what? Not only are they unpaid, they don’t even have the right to visit the Expo themselves, apart from buying their own ticket… (at least that’s what they told me when I asked)

Our favourite pavilions

The Dutch pavilion is my favourite, with its crazy little town in the air.

The Dutch pavilion

Each house in this weird village features interesting works by artists and designers, and the ground level is free and offers a place to rest on the fake grass. The principle of the “main street” with a continuous flow of visitors just means you don’t have to wait in a queue, and the view all around is beautiful, especially at the top.

The Australian pavilion is also great, for the humour of its static exhibition, and for the show, which is both technically fascinating and one of the few to address the theme of the Expo: “better city, better life”.

In this pavilion again, the continuous flow of visitors plus a smart way to enter and exit the theater makes the queue not much of a problem.

The General Motors pavilion offers a nice anticipation of what the future of the car might be, at least in countries like China, where everything is possible. Queue time ranged from 1h down to 20 minutes when we eventually visited it. I did appreciate that a brand like GM had the courage to present such a technical dream, although it recalls similar ideas from Toyota. The queue time remains reasonable thanks to no fewer than 4 theaters operating at the same time around a shared stage.

The pavilion of Denmark (also continuous flow, hence no queue) is nice, with its little mermaid and the funny presentation of “how we live in Denmark” (rather weird to be honest), and its crazy and convoluted bench, and again a great view at the top.

Other pavilions with not much queuing that we enjoyed: Vietnam (skinned with bamboo), New Zealand (again a continuous flow, also addresses the theme with the question: “what is better life”), Nepal (smart design in two phases, where not many people actually go to the second phase, another good idea to reduce the queue), Cuba (smart small pavilion, with just a good bar where you can buy a good drink), State Grid (with its immersive video cube), Japan Business (promotional but not too bad), CSSC (exhibition about sea ships now and in the future), and the Czech pavilion where everything is presented on the ceiling.

And a special mention for the North Korea pavilion, self-entitled “Paradise for People“, which is the height of kitsch with its fountain and a “back to the 80’s” institutional video. Hard to explain that feeling…

And now some pictures in random order.

Architectural madness

If you like architecture, and we do, then on the outside the Expo offers a real-size gallery of architects’ dreams. Each pavilion is usually built to last 6 months and must be impressive enough yet green enough at the same time. The most common strategy to that end is to wrap a more conventional or temporary construction in a thin skin of something (anything lightweight, really).

Pavilions that I find especially beautiful from the outside are the pavilions of Spain (with large petals made of small wood branches), Emirates (all golden metal), South Korea (geometric volumes in laser-cut aluminium), Australia, Germany, UK (always looks like 3D rendering, even for real), Vietnam (bamboo), Denmark, Canada (all skinned in wood), Russia, Estonia (colourful), Mexico (with the wings), Portugal (a nice mix of natural cork and red neon) and Luxembourg (looks like a monster house).

Movies and LED screens everywhere

World Expos are supposed to demonstrate things never seen before, or at least impressive and novel techniques and ideas. Either we have become very blasé, or there is nothing novel left today (which I don’t believe), or novel things are too complex to explain or too abstract to demonstrate, or perhaps we already know everything thanks to the Internet; in any case, every major pavilion is more or less centered around a show in a theater using immersive techniques such as 3D and every possible kind of screen madness.

LED technology is the other grand winner of the Expo 2010, with giant LED screens everywhere, outside and inside each pavilion. Video projection resists, with some video-mapping techniques, but has already lost the battle.

Cheers,

Pictures taken with a small Canon compact camera, in the usual Shanghai fog.

Read More

Principles for using annotations

Deciding where and how to place annotations is not innocent. The last thing we want is to create extra maintenance effort because of the annotations. In other words, we want annotations that are stable, or that change for the same reasons and at the same time as the elements they annotate. This article suggests some good practices on how to design annotations.

Annotations are location-based

A special kind of wall annotation

Language annotations, or even good old xDoclet tags, enable us to augment program elements with additional semantics, which can be used to configure tools, frameworks or containers.

Configuration is now increasingly done through annotations spread all over the project elements. The key advantage is that the location of the annotation directly references the program element (interface, class etc.), as opposed to configuration files that must reference program elements using awkward and error-prone qualified names like “com.mycompany.somepackage.MyClass”, which are also fragile under refactoring.

For example, we can annotate an entity to declare how it must be persisted, we can annotate a class to declare how it must be instantiated by a Dependency Injection framework, and we can annotate test methods to declare their purpose.
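For instance, here is a small sketch mixing JPA, JSR-330 and JUnit annotations (three separate source files condensed into one listing; CustomerRepository is a hypothetical interface):

import javax.inject.Inject;
import javax.persistence.Entity;
import javax.persistence.Id;
import org.junit.Test;

@Entity                 // persistence: tells the ORM how to map this class
public class Customer {
    @Id
    Long id;
}

public class InvoiceService {
    @Inject             // dependency injection: the container provides the collaborator
    CustomerRepository customers; // hypothetical repository interface
}

public class InvoiceServiceTest {
    @Test               // test framework: marks the method as a test case
    public void emits_one_invoice_per_order() { /* ... */ }
}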

If not thought through and placed carefully, annotations can make your code harder to maintain. This happens when annotations are placed in the “wrong” place, or when they introduce undesirable coupling, as we will see.

Dependencies still matter

The question of coupling between elements of the code base is also relevant for annotations. That the coupling is done via an annotation rather than plain code does not make it more acceptable.

We want to group together things that change together. As a consequence, put your annotations on the elements that change with the annotations.

In particular, when the annotation is used to declare a dependency:

Only annotate the dependent element, not the element depended on

If you use Dependency Injection and you want the class MyServiceImpl to be injected everywhere the interface MyService is used, then Guice offers the annotation @ImplementedBy:

@ImplementedBy(MyServiceImpl.class)
interface MyService {... }

This annotation is a direct violation of the advice above, since it makes a pure abstraction (an interface) aware of an implementation, whereas the regular dependency should be the other way round: only the implementation must depend on the interface.

I must however acknowledge that the annotation @ImplementedBy is quite convenient for unit tests anyway, to declare a default implementation for the interface. And it was done just for that, as described in the Guice documentation along with a warning:

Use @ImplementedBy carefully; it adds a compile-time dependency from the interface to its implementation.

Favor intrinsic annotations

Another annotation on the wall in Paris

If you want to declare that a service is stateless, you cannot get it wrong: just put the annotation @Stateless on its interface. This is straightforward because being stateless is a truly intrinsic property. It also makes perfect sense to annotate a method argument with the @Nullable annotation, as the capability to accept null or not is really intrinsic to the method.
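A small sketch of both cases, using the standard EJB @Stateless and JSR-305 @Nullable annotations (class and method names are purely illustrative):

import javax.annotation.Nullable;
import javax.ejb.Stateless;

@Stateless  // intrinsic: holding no conversational state is a property of the service itself
public class RateConverter {

    public double convert(double amount, @Nullable String targetCurrency) {
        // @Nullable is intrinsic to the method: the argument is genuinely allowed to be null
        if (targetCurrency == null) {
            return amount; // hypothetical behaviour: no conversion requested
        }
        // ... conversion logic elided ...
        return amount;
    }
}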

On the other hand, a service interface does not really care about how it is called. It can be called by another object (local call) or remotely, through some remote proxy. The object is not intrinsically local or remote in itself.

The point is that the decision to consume the service locally or remotely does not belong to the service, in itself, but depends on each client. It actually belongs to each use-case considered.

Said another way, specifying @Remotable or @Local directly on the service would require the developer of the service to guess how it will be used!

Intrinsic properties really belong to the element and therefore are stable, as opposed to use-case-specific properties that vary with the particular case of use. Hence, if we want stable annotations:

Only annotate an element about its intrinsic properties, and avoid annotating about use-case-specific properties.

Annotations as pointcuts

Let’s consider the example of an accounting service in a bank. Only selected categories of staff can access this service. We can use annotations to declare its security configuration:

@RolesAllowed({"auditor", "bankmanager", "admin"})

The problem with that approach is that it couples the service directly to the user roles defined elsewhere. As a consequence, if we want to change the user roles (say we now need to add the role “externalauditor”), we will have to review every security annotation and change them. On the other hand, if we want to change the access policy (which happens every time new senior management comes into place), we will also have to change annotations all over the code. How can we improve on that?

We can improve the situation by going back to the business analysis on the topic and separating what’s intrinsic from what’s not. In other words, we want to find out how a BA came up with the security roles for the service.

Rather than specifying the need for security in terms of allowed user roles, we can instead declare the facts: the service is “sensitive” and is about “accounting”:

@Domain(Accounting)
@Confidentiality(Sensitive)
And now a beautiful car annotation

Then we can define expressions that use the declared annotations (which are now stable because they are intrinsic) to select elements (here, services) and associate them with allowed user roles. These rules should be defined outside of the elements they apply to, typically in configuration files.

Thanks to the annotations that already declare half of the security knowledge, expressing the rules becomes much simpler than doing it method by method. So next time the senior management changes and decides that, from now on, “every service that is both Confidentiality(Sensitive) and Domain(Accounting) is only allowed to corporate-officer roles”, you just have to update a few rules expressed in terms of domain and confidentiality level, rather than listing many methods.
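As a sketch of what such intrinsic annotations could look like (all names, and the external rule shown at the end, are purely illustrative assumptions):

import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

enum BusinessDomain { ACCOUNTING, TRADING, REPORTING }
enum ConfidentialityLevel { PUBLIC, INTERNAL, SENSITIVE }

@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.TYPE)
@interface Domain {
    BusinessDomain value();
}

@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.TYPE)
@interface Confidentiality {
    ConfidentialityLevel value();
}

// The service only declares its intrinsic facts:
@Domain(BusinessDomain.ACCOUNTING)
@Confidentiality(ConfidentialityLevel.SENSITIVE)
interface AccountingService { /* ... */ }

// The use-case-specific policy lives outside the code, e.g. in a configuration file:
//   Domain=ACCOUNTING and Confidentiality=SENSITIVE  ->  roles {auditor, bankmanager, admin}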

The mindset is very similar to AOP where we first define pointcuts, and then attach advices to them. Here we use annotations as an alternative way to declare the pointcuts.

Conclusion

Annotations are very efficient for declaring properties about program elements directly on the elements. They are robust against refactoring and are easier to use than specifying long qualified names in XML files.

To get the best of annotations, we still need to consider the coupling they can introduce, in particular with respect to dependencies. If a class should not know about another, its annotations should not either.

Annotations are much more stable (less likely to change) when they only relate to intrinsic properties of the elements they are located on. When we need to configure cross-cutting concerns (security, transactions etc.), annotations can be used to declare the half of the knowledge that is really intrinsic to the elements, in the same spirit as pointcuts in AOP.

All that leads to the acknowledgement that even though annotations can be of huge value, in practice there is still a case for configuration files to complement them. In this approach, annotations on elements declare what belongs to the elements, while each use-case-specific configuration file makes use of the annotations and as a result is much simpler.

Read More

Formal definitions of architecture

Sounds like a challenge: a paper is proposing an approach for a formal definition of architecture, design, and implementation, among other things.

The link was offered and commented on in a post by Ralph Johnson, whom everyone knows as one of the four authors of the “Gang of Four” book.

Though I have not tried to fully understand the tedious details of the formalism in this paper, the introduction of two particular concepts sounds interesting to me.

First, the concept of intensional (vs. extensional):

Traditionally, intensional specifications define a concept via a list of constraints. For example, mathematical concepts are usually defined intensionally. For instance, “A prime number is a number that divides only by itself and by the number 1”. In contrast, the North Atlantic Treaty defines the set of NATO members extensionally, namely, by itemizing its members: United States, United Kingdom, Norway, and so forth. […] We say that a specification is intensional if and only if it has an unbounded number of instances. […] Conversely, all other expressions are extensional.

Only an intensional specification of a lounge could be satisfied by this one.

I love this concept! In my former projects I have used the Specification pattern (Eric Evans, Martin Fowler) a lot in the domain model, for example to represent asset classes using the Predicate interface from Apache Commons Collections rather than through explicit lists of assets. Now I can name this concept: it is called intensional.
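For instance, a minimal sketch of such an intensional asset class, assuming Apache Commons Collections 4 and a hypothetical Asset class:

import org.apache.commons.collections4.Predicate;

class Asset {
    final String type;
    final String currency;
    Asset(String type, String currency) { this.type = type; this.currency = currency; }
}

class AssetSpecifications {
    // Intensional definition: the asset class is defined by a constraint,
    // not by enumerating its members one by one.
    static Predicate<Asset> euroDenominatedBonds() {
        return asset -> "BOND".equals(asset.type) && "EUR".equals(asset.currency);
    }
}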

Then the concept of local (vs. non-local or global):

Locality criterion: What distinguishes architectural from design specifications is that architectural specifications must be met by every extension of the program.

Then a discussion that is not totally new but still food for thoughts:

“Architecture is the decisions that you wish you could get right early in a project.” Why do people feel the need to get some things right early in the project? The answer, of course, is because they perceive those things as hard to change. So you might end up defining architecture as “things that people perceive as hard to change.”

And again, on the never-quite-clear difference between design and architecture:

“In a sense, I define architecture as a word we use when we want to talk about design but want to puff it up to make it sound important.”

I already heard that as well…

Read More

Sharing a common architecture or vision within a team

What do you think about when you hear the word “architecture” applied to software? Fowler defines it in “Who Needs an Architect?”: “In most successful software projects, the expert developers working on that project have a shared understanding of the system design. This shared understanding is called ‘architecture.'”.

However for most people the word “architecture” comes loaded with middleware connotations, hence I now consider this word inappropriate. Let us consider here the concept of a shared “vision” of the design.

First it has to be a shared vision of the domain, for instance the interest rate swaps trading business. Words must have only one meaning, known by everyone, whether in IT, marketing or support. A shared glossary is key for that.

Then the IT team must share a consistent set of practices: consistent between developers, and consistent between the practices themselves. This also means that the set of practices de facto excludes other practices that conflict with it, and this has to be clearly acknowledged.

Even though any set of practices can be considered arbitrary, everybody in a team must adhere to it, preferably wholeheartedly. Of course the more accepted and documented the practices, the easier it is for everybody to adhere to them.

For example, a team can have a set of practices that includes Agile programming (be prepared for change, no big design upfront), automated testing (unit tests, stress tests), domain-driven analysis as devised by Eric Evans, the use of proven solutions when they exist (hence design and analysis patterns from GoF, POSA, Fowler, Doug Lea…), a bit of functional-programming style (favour immutability and side-effect-free functions, pipes and filters…), and the Eclipse default coding conventions.

Some practices that also belong to the shared set might not be directly documented, though they derive from articles, papers, blogs and some personal opinions. Examples of such custom practices that I like a lot include “Align the code with the business” and “Think options, not solutions”.

“Align the code with the business” focuses on the smallest level, i.e. the class and method level. Every bit of the application must make sense in the domain considered. A class must reflect some relevant concept of the domain, concrete or more abstract, not of the requirement. The rationale is that while requirements change all the time, the domain concepts are very stable; they are not true or false, they just exist!

“Think options, not solutions” refers to the very basics of Agile and XP: “be prepared for change”. It also acknowledges the fact that most of the code in a well-done software project provides standalone, encapsulated and reusable building blocks that are then assembled together using a smaller amount of glue code, which can even be reduced by using a dependency-injection framework such as Spring. Typically, a concept from the domain, such as an interpolation algorithm in the finance domain, translates into an Interpolation interface along with several alternate implementations. Most of the time, only one implementation out of the ten possible is really used at a time. However the option to use the other nine alternatives is there. This also means that we can assemble a release of the software using some subset of its building blocks, and assemble another release using another subset, at the same time. One codebase, several possible applications: this is the goal.
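A minimal sketch of this “options” style, with names taken from the interpolation example (the implementation bodies are deliberately left empty):

// One stable domain concept...
interface Interpolation {
    double valueAt(double x);
}

// ...and several alternate implementations, each a standalone building block.
class LinearInterpolation implements Interpolation {
    public double valueAt(double x) { /* ... */ return x; }
}

class CubicSplineInterpolation implements Interpolation {
    public double valueAt(double x) { /* ... */ return x; }
}

// Only one implementation is wired into a given release (for example through Spring),
// but the others remain available as cheap options for the future.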

In addition to the explicit practices, the existing code base implicitly shows practices that can either be reused just for the sake of consistency, or replaced as widely as possible when they conflict. Prior art, when of good-enough quality, can form a skeleton for a shared vision of how to create software. Again this is arbitrary, but consistent, and consistent design is valuable in itself, often more so than the individual design decisions.

Of course there is a point where more shared practices become counter-productive, for instance when there are too many of them (more effort for newcomers to learn them, a feeling of losing freedom, lack of flexibility). However when they are standard enough, or becoming standard enough, developers should consider acquiring these practices a long-term, beneficial investment.

Initially published on Jroller on Monday January 21, 2008

Read More