Patterns for using custom annotations

If you happen to create your own annotations, for instance to use with Java 6 Pluggable Annotation Processors, here are some patterns that I collected over time. Nothing new, nothing fancy, just putting everything into one place, with some proposed names.


Local-name annotation

Have your tools accept any annotation as long as its simple name (without the fully-qualified package prefix) is the expected one. For example, com.acme.NotNull and net.companyname.NotNull would be considered the same. This lets users provide their own annotations rather than the ones packaged with the tool, so that they do not have to depend on it.

Example in the Guice documentation:

Guice recognizes any @Nullable annotation, like edu.umd.cs.findbugs.annotations.Nullable or javax.annotation.Nullable.
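Here is a minimal sketch of how a tool might implement this pattern using plain reflection; the class and method names are hypothetical, not taken from any particular tool, and only runtime-retained annotations are visible this way:

import java.lang.annotation.Annotation;
import java.lang.reflect.Method;

// Hypothetical helper: accept any annotation whose simple name matches,
// regardless of its package (com.acme.NotNull, javax.annotation.Nullable, ...)
public class LocalNameMatcher {

    public static boolean hasAnnotationNamed(Method method, String simpleName) {
        for (Annotation annotation : method.getAnnotations()) {
            if (annotation.annotationType().getSimpleName().equals(simpleName)) {
                return true;
            }
        }
        return false;
    }
}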

Composed annotations

Annotations can have annotations as values. This allows for complex, tree-like configurations, such as mappings from one format to another (from/to XML, JSON, relational databases).

Here is a rather simple example from the Hibernate annotations documentation:

@AssociationOverride( 
   name="propulsion", 
   joinColumns = @JoinColumn(name="fld_propulsion_fk") 
)
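As a minimal sketch of how such a composed annotation is declared (the @Column and @FieldMapping names below are hypothetical, not the actual JPA/Hibernate types), an annotation element can simply have another annotation as its type:

import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;

@Retention(RetentionPolicy.RUNTIME)
@interface Column {
    String name();
}

// A composed annotation: one of its elements is itself an annotation
@Retention(RetentionPolicy.RUNTIME)
@interface FieldMapping {
    String field();
    Column column();
}

// Usage: @FieldMapping(field = "propulsion", column = @Column(name = "fld_propulsion_fk"))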

Multiplicity Wrapper

Java does not allow the same annotation to be used several times on a given target.

To work around that limitation, you can create a special annotation that expects a collection of values of the desired annotation type. For example, if you would like to apply the annotation @Advantage several times, you create the Multiplicity Wrapper annotation: @Advantages (advantages = {@Advantage}).

Typically the multiplicity wrapper is named after the plural form of its enclosed elements.

Example in Hibernate annotations documentation:

@AttributeOverrides( {
   @AttributeOverride(name="iso2", column = @Column(name="bornIso2") ),
   @AttributeOverride(name="name", column = @Column(name="bornCountryName") )
} )
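A minimal sketch of such a wrapper, using the hypothetical @Advantage / @Advantages annotations mentioned above: the wrapper simply declares an array of the wrapped annotation type (naming the element value() lets you omit the element name at the use site):

import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;

@Retention(RetentionPolicy.RUNTIME)
@interface Advantage {
    String value();
}

// The multiplicity wrapper, named after the plural form of the wrapped annotation
@Retention(RetentionPolicy.RUNTIME)
@interface Advantages {
    Advantage[] value();
}

// Usage: @Advantages({ @Advantage("cheap"), @Advantage("fast") })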


Meta-inheritance

It is not possible in Java for annotations to derive from each other. To work around that, the idea is simply to annotate your new annotation with the “super” annotation, which becomes a meta-annotation.

Whenever you use your own annotation marked with the meta-annotation, the tools will actually consider it as if it were the meta-annotation.

This kind of meta-inheritance helps centralize the coupling to the external annotation in one place, while making the semantics of your own annotation more precise and meaningful.

Example in the Spring annotations, with the @Component annotation (the same approach also works with @Qualifier):

Create your own custom stereotype annotation that is itself annotated with @Component:

@Target(ElementType.TYPE)
@Retention(RetentionPolicy.RUNTIME)
@Component
public @interface MyComponent {
    String value() default "";
}

@MyComponent
public class MyClass...

Another example in Guice, with the Binding Annotation:

@BindingAnnotation
@Target({ FIELD, PARAMETER, METHOD })
@Retention(RUNTIME)
public @interface PayPal {}

// Then use it
public class RealBillingService implements BillingService {
  @Inject
  public RealBillingService(@PayPal CreditCardProcessor processor,
      TransactionLog transactionLog) {
    ...
  }
}

Refactoring-proof values

Prefer values that are robust to refactoring over String literals. MyClass.class is better than “com.acme.MyClass”, and enums are also encouraged.

Example in Hibernate annotations documentation:

@ManyToOne( cascade = {CascadeType.PERSIST, CascadeType.MERGE}, targetEntity=CompanyImpl.class )

And another example in the Guice documentation:

@ImplementedBy(PayPalCreditCardProcessor.class)
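A minimal sketch of an annotation designed this way; the names below are hypothetical, the point being that Class and enum elements survive renames while a String literal does not:

import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;

enum Format { XML, JSON }

@Retention(RetentionPolicy.RUNTIME)
@interface ExportedAs {
    Class<?> targetType();              // refactoring-proof, unlike "com.acme.MyDto"
    Format format() default Format.XML;
}

// Usage: @ExportedAs(targetType = CustomerDto.class, format = Format.JSON)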

Configuration Precedence rule

Convention over Configuration and Sensible Defaults are two existing patterns that make a lot of sense with respect to using annotations as part of a configuration strategy. Having no need to annotate is way better than having to annotate for little value.

Annotations are by nature embedded in the code, hence they are not well-suited to every case of configuration, in particular deployment-specific configuration. The solution is of course to mix annotations with other mechanisms and use each of them where it is most appropriate.

The following approach, based on a precedence rule where each mechanism overrides the previous one, appears to work well:

Default value < Annotation < XML < programmatic configuration

For example, the default values could be suited for unit testing, while the annotations define all the stable configuration, leaving the other options to configure deployments at the various stages, such as QA or production environments.
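Here is a minimal sketch of the precedence rule, with hypothetical lookup methods that return null when a given mechanism does not define the setting:

class ConfigurationResolver {

    String fromProgrammaticConfig(String key) { return null; }
    String fromXml(String key)                { return null; }
    String fromAnnotation(String key)         { return "declared-on-the-code"; }
    String defaultValue(String key)           { return "built-in-default"; }

    // programmatic configuration overrides XML, which overrides the annotation,
    // which overrides the built-in default value
    String resolve(String key) {
        String value = fromProgrammaticConfig(key);
        if (value == null) value = fromXml(key);
        if (value == null) value = fromAnnotation(key);
        if (value == null) value = defaultValue(key);
        return value;
    }
}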

This principle is common (Spring and Java EE 6, among others), for example in JPA:

The concept of configuration by exception is central to the JPA specification.

Conclusion

This post is mostly a notepad of various patterns on how to use annotations, for instance when creating tools that process annotations, such as the Annotation Processing Tool (apt) in Java 5 and the Pluggable Annotation Processors in Java 6.

Don’t hesitate to contribute better pattern names, additional patterns and other examples of use.

EDIT: A related previous post, with a focus on how annotations can lead to coupling, hence dependencies.

Pictures Creative Commons from Flickr, by ninaksimon and Iwan Gabovitch.


Principles for using annotations

Deciding where and how to place annotations is not an innocent decision. The last thing we want is to create extra maintenance effort because of the annotations. In other words, we want annotations that are stable, or that change for the same reasons and at the same time as the elements they annotate. This article suggests some good practices on how to design annotations.

Annotations are location-based

A special kind of wall annotation

Language annotations or even good old XDoclet tags make it possible to augment program elements with additional semantics, which can be used to configure tools, frameworks or containers.

Configuration is now increasingly done through annotations spread all over the project elements. The key advantage is that the location of the annotation directly references the program element (interface, class etc.), as opposed to configuration files that must reference program elements using awkward and error-prone qualified names such as “com.mycompany.somepackage.MyClass”, which are also fragile under refactoring.

For example, we can annotate an entity to declare how it must be persisted, we can annotate a class to declare how it must be instantiated by a Dependency Injection framework, and we can annotate test methods to declare their purpose.

If not placed and thought through carefully, annotations can make your code harder to maintain. This happens when annotations are placed in the “wrong” place, or when they introduce undesirable coupling, as we will see.

Dependencies still matter

The question of coupling between elements of the code base is also relevant for annotations. That the coupling is done via an annotation rather than plain code does not make it more acceptable.

We want to group together things that change together. As a consequence, put your annotations on the elements that change with the annotations.

In particular, when the annotation is used to declare a dependency:

Only annotate the dependent element, not the element depended on

If you use Dependency Injection and you want the class MyServiceImpl to be injected everywhere the interface MyService is used, then Guice offers the annotation @ImplementedBy:

@ImplementedBy(MyServiceImpl.class)
interface MyService {... }

This annotation is a direct violation of the advice above, since it makes a pure abstraction (an interface) aware of an implementation, whereas the regular dependency should be the other way round: only the implementation must depend on the interface.

I must however acknowledge that the annotation @ImplementedBy is quite convenient for unit tests anyway, to declare a default implementation for the interface. And it was done just for that, as described in the Guice documentation along with a warning:

Use @ImplementedBy carefully; it adds a compile-time dependency from the interface to its implementation.

Favor intrinsic annotations

Another annotation on the wall in Paris

If you want to declare that a service is stateless, you cannot get it wrong: just put the annotation @Stateless on its interface. This is straightforward because being stateless is a truly intrinsic property. It also makes perfect sense to annotate a method argument with the @Nullable annotation, as the capability to expect null or not is really intrinsic to the method.

On the other hand, a service interface does not really care about how it is called. It can be called by another object (local call) or remotely, through some remote proxy. The object is not intrinsically local or remote in itself.

The point is that the decision to consume the service locally or remotely does not belong to the service, in itself, but depends on each client. It actually belongs to each use-case considered.

Said another way, specifying @Remotable or @Local directly on the service would require the developer of the service to guess how it will be used!

Intrinsic properties really belong to the element and therefore are stable, as opposed to use-case-specific properties that vary with the particular case of use. Hence, if we want stable annotations:

Only annotate an element about its intrinsic properties, and avoid annotating about use-case-specific properties.

Annotations as pointcuts

Let’s consider an example of an accounting service in a bank. Only selected categories of staff can access this service. We can use annotations to declare its security configuration:

@RolesAllowed({"auditor", "bankmanager", "admin"})

The problem with that approach is that it couples the service directly to the user roles defined elsewhere. As a consequence, if we want to change the user roles (say we now need to add the user role “externalauditor”), we will have to review and change every security annotation. On the other hand, if we want to change the access policy (which happens every time new senior management comes into place), we will also have to change annotations all over the code. How can we improve that?

We can improve the situation by going back to the business analysis on the topic and separating what is intrinsic from what is not. In other words, we want to find out how a business analyst came up with the security roles for the service.

Rather than specifying the need for security in terms of allowed user roles, we can instead declare the facts: the service is “sensitive” and is about “accounting”:

@Domain(Accounting)
@Confidentiality(Sensitive)
And now a beautiful car annotation

Then we can define expressions that use the declared annotations (which are now stable because they are intrinsic) to select elements (here services) and associate them to allowed user roles. These rules should be defined outside of the elements they apply to, typically in configuration files.

Thanks to the annotations that already define half of the security knowledge, expressing the rules becomes much simpler than doing it method by method. So next time the senior management changes and decides that from now on “every service that is both Confidentiality(Sensitive) and Domain(Accounting) is only allowed to corporate-officer roles”, you just have to update a few rules expressed in terms of domain and confidentiality level, rather than listing many methods.
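A minimal sketch of what these intrinsic annotations could look like (the enum and annotation names are assumptions, not an existing API); the use-case-specific rule that maps them to user roles stays outside the code, for example in a configuration file:

import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;

enum DomainName { ACCOUNTING, TRADING }
enum ConfidentialityLevel { PUBLIC, SENSITIVE }

@Retention(RetentionPolicy.RUNTIME)
@interface Domain {
    DomainName value();
}

@Retention(RetentionPolicy.RUNTIME)
@interface Confidentiality {
    ConfidentialityLevel value();
}

// The service only declares its intrinsic facts:
@Domain(DomainName.ACCOUNTING)
@Confidentiality(ConfidentialityLevel.SENSITIVE)
interface AccountingService {
}

// Elsewhere, a rule expressed outside the code could state:
// "every SENSITIVE + ACCOUNTING service is only allowed to corporate-officer roles"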

The mindset is very similar to AOP, where we first define pointcuts and then attach advice to them. Here we use annotations as an alternative way to declare the pointcuts.

Conclusion

Annotations are very efficient for declaring properties about program elements directly on the elements. They are robust under refactoring and easier to use than specifying long qualified names in XML files.

To get the best of annotations, we still need to consider the coupling they can introduce, in particular with respect to dependencies. If a class should not know about another, its annotations should not either.

Annotations are much more stable (less likely to change) when they only relate to intrinsic properties of the elements they are located on. When we need to configure cross-cutting concerns (security, transactions etc.), annotations can be used to declare the half of the knowledge that is really intrinsic to the elements, in the same spirit as pointcuts in AOP.

All that leads to the acknowledgement that even though annotations can be of huge value, in practice there is still a case for configuration files to complement them. In this approach, annotations on elements declare what belongs to the elements, while each use-case-specific configuration file makes use of the annotations and as a result is much simpler.


Pattern grammar for the variant problem

For tools to be aware of patterns, the patterns must be formalized, at least partially. At this point I must quote Gregor Hohpe to clarify my thoughts, as I strongly agree with his skepticism:

Typically, when people ask me about “codifying” or “toolifying” patterns my first reaction is one of skepticism. Patterns are meant to be a human-to-human communication mechanism, not a human-to-machine mechanism. After all, I have pointed many people to the fact that a pattern is not just the piece of code in the example section. It’s the context-problem-forces-solution combination that makes patterns so useful.

Patterns link a problem part to a solution part. This is expressed within the limits of a stated context, outside of which the pattern is no longer applicable. Patterns also emphasize the forces involved, which you must consider to decide how, and whether or not, to apply the pattern.

Pattern literature usually describes examples of applying patterns. In your project, you will have to do more work to adapt the pattern solution to your exact need. A pattern solution may be stretched a lot, but the pattern remains as long as humans still recognize its presence. Every different way of applying a pattern is called a pattern variant.

Addressing the variant problem

Formal descriptions of anything human are too restrictive, and this is especially true for patterns, which are the product of human analysis and resist simple formalization. However, if we focus on sub-parts of patterns, it becomes easier to formalize them, at least for their solution part.

For example, a design pattern that uses (in its solution part) some form of inheritance admits several variants. At first, the pattern solution seems to resist formalization. However, if we now focus on the inheritance part only, we can enumerate every possible alternative for it. For example we can use:

  • interface and classes that implement it
  • abstract class and classes that extend it
  • concrete class provided it is not final (assume we’re in Java), so that it can be extended

Notice that each alternative is a solution to the same problem: “How to realize some form of inheritance”. We can say that each alternative is indeed a pattern, and by the way Mark Grand already described them in Patterns in Java Volume 1. These patterns are easy to formalize, as they can be precisely described in terms of programming language elements.

Tree structure in volume (Milano International Fair 2009)

How can we split a pattern into parts? The idea is to identify the areas that are fragile with respect to the variant problem in the solution part of a pattern, and to consider them as lower-level problems embedded inside the bigger pattern.

In the example before, the problem was to achieve “some form of inheritance”, and we listed three patterns that address this problem.

Provided it can be split into sub-parts (hence into smaller problems), any pattern solution can be formalized by recursively formalizing its sub-parts. If a sub-part cannot be easily made formal, then it can be split again into sub-parts, and so on until each sub-part can.

Given a pattern solution that we want to formalize:

  • If it can be described formally directly, then we are done (terminal)
  • If it cannot be fully described formally, then extract the problematic sub-parts into sub-problems, then find every pattern that addresses each sub-problem, and formalize their solution part

We can then represent a pattern as a tree of smaller patterns, where the solution part of each pattern is connected to the problem part of another pattern.
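As a rough illustration of this tree, here is a sketch with hypothetical names (not an actual tool model): each pattern links the problem it addresses to a partly formal solution and to the sub-problems it delegates to lower-level patterns.

import java.util.List;

class Intent {
    final String description;                 // a problem, e.g. "some form of inheritance"
    Intent(String description) { this.description = description; }
}

class Pattern {
    final Intent addresses;                    // the problem this pattern solves
    final String formalSolutionPart;           // what can be described in programming language elements
    final List<Intent> subProblems;            // what is delegated to lower-level patterns

    Pattern(Intent addresses, String formalSolutionPart, List<Intent> subProblems) {
        this.addresses = addresses;
        this.formalSolutionPart = formalSolutionPart;
        this.subProblems = subProblems;
    }
}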

From pattern language to pattern grammar

In the car, some parts can be replaced by other alternate parts that play the same contract (e.g. the wheels)

When the pattern is applied into the code, at each node in the tree there is actually a selection of which variant of the sub pattern to use. As such, each selected pattern represents an atom of decision in the design process.

It then appears that we have a form of grammar for patterns, with terminal pattern solutions T (easy to formalize in terms of programming language elements), non-terminal parts of pattern solutions N (that cannot be easily formalized, but that can formally ask for help to solve their sub-problems), and where the production rules P are nothing but the patterns themselves:

Patterns are production rules that link:
elements of N (the problems) to elements of (N ∪ T)

Conclusion

I have briefly suggested a way to formalize pattern solutions in spite of their fragility with respect to their variants. This approach identifies patterns as production rules in a grammar over the set of patterns considered.

This perspective is well-suited for tools that work with patterns in real-world projects, where the patterns are indeed applied in many variant forms. The drawback of this approach is that every known pattern that is variant-fragile must be reconsidered and have its solution split into a formal part and one or several sub-problems to be addressed by specific, lower-level patterns, which themselves must be formalized in the same way.

It is essential that for every problem (“intent”) we can enumerate every pattern that addresses it. Intents can be also classified as a taxonomy, where some intents are specialized versions of more generic intents.

This approach does not claim to formalize the full potential of patterns; it only aims at enabling tools to understand patterns that are already there, in order to assist the developers with various tasks.


Patterns express intents

Patterns represent a couple (intent, solution); sometimes they refer to a solution, but more often they essentially represent an intent, independently of its solution.

Sometimes the solution part of patterns includes a trick or a workaround to overcome the limits of a language, but patterns cannot be reduced to that trick. Indeed, a very important role of patterns (not only design patterns but patterns in general) is that they represent stereotypes of intents.

A matter of intent

Therefore, it does not really matter if the Strategy pattern can be expressed using a Java interface, a C++ functor or a first-class function: it remains a Strategy because this is just what we want: “Strategy lets the algorithm vary independently from clients that use it“.

Is the intent of this book holder clear?

Another similar pattern is the Command pattern, whose intent is: “Encapsulate a request as an object, thereby letting you parameterize clients with different requests, queue or log requests, and support undoable operations.” Here the intent talks about ‘object’ because it was written for an object-oriented context, but it can easily be made generic if you think ‘handle on function’ (or closure etc.) instead of object. Again, even if first-class functions such as delegates in C# can achieve this goal, they do not replace the need to declare the precise intent: “you want to parameterize clients with different requests, queue or log requests, and support undoable operations.” So in some sense, just using a functor without declaring that the intent is to do a Strategy or a Command is like using untyped variables: you are supposed to know what you are doing, but it is implicit.*

Yet another example with the Visitor pattern and its intent: “Represent an operation to be performed on the elements of an object structure. Visitor lets you define a new operation without changing the classes of the elements on which it operates.” This is typically achieved through double-dispatch in languages lacking multimethods, but regardless of how it is implemented the intent remains, and this is what matters most.
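For illustration, here is a minimal double-dispatch sketch of that intent; the Shape, Circle and Square names are just an assumed example domain:

interface ShapeVisitor {
    void visit(Circle circle);
    void visit(Square square);
}

interface Shape {
    void accept(ShapeVisitor visitor);          // first dispatch, on the element type
}

class Circle implements Shape {
    public void accept(ShapeVisitor visitor) { visitor.visit(this); }   // second dispatch
}

class Square implements Shape {
    public void accept(ShapeVisitor visitor) { visitor.visit(this); }
}

// A new operation is just a new visitor; Circle and Square remain unchanged.
class AreaPrinter implements ShapeVisitor {
    public void visit(Circle circle) { System.out.println("computing the area of a circle"); }
    public void visit(Square square) { System.out.println("computing the area of a square"); }
}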

Generic Vs. specialized intents

For example, consider the intent of the generic Proxy pattern as defined in a paper by James Noble:

The Proxy pattern is used to “Provide a surrogate or placeholder for another object (the Subject) to control access to it”. A Proxy object provides the same interface as the original Subject object, but intercepts any messages directed to the Subject. A Proxy object can therefore be used in place of the Subject by a client which is designed to access the Subject, without the client being aware the Subject has been replaced by a Proxy.

This intent can then be specialized for various purposes, leading to several specialized patterns:

What's your intent if you buy that? (yes this is brand new furniture for sale)
  • Remote Proxy: provides a local representative to an object that is only available on a remote machine
  • Protection Proxy: checks the access rights before directing to the original object
  • Virtual Proxy: creates an expensive object on demand, so that it is created only when necessary
  • Cache Proxy: an object representative that remembers the result of calling the methods of an object, to avoid directing subsequent calls to this object again
  • Counter Proxy (smart pointer management), etc.

We say that these patterns are specializations of the Proxy pattern. The main Proxy pattern introduces the common solution—providing a placeholder for an object. Every specialized proxy pattern is a special kind of Proxy: A “Protection Proxy” is a special kind of “Proxy”. The specialization here only deals with the Intent part of the patterns.

Conclusion

Now that functional languages are getting more attention, it has become fashionable to question the usefulness of patterns: “Scala does that without the need for patterns”. I agree Scala is great, but I disagree with this argument. Patterns are first of all signs to denote intents, even if they can do more.

References

Patterns as Signs, James Noble and Robert Biddle, Victoria University of Wellington, New Zealand

Classifying Relationships Between Object-Oriented Design Patterns, James Noble, Microsoft Research Institute, Macquarie University

* By the way, how do you achieve “undoable operations” using first-class functions in an elegant way? This would require always passing two functions together: do() and undo().
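A minimal sketch of that idea, with hypothetical names and Runnable standing in for a first-class function: the command is nothing more than the pair of functions passed together.

class UndoableCommand {
    private final Runnable doIt;
    private final Runnable undoIt;

    UndoableCommand(Runnable doIt, Runnable undoIt) {
        this.doIt = doIt;
        this.undoIt = undoIt;
    }

    void execute() { doIt.run(); }
    void undo()    { undoIt.run(); }
}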


Toward smarter dependency constraints (patterns to the rescue)

Low coupling between objects is a key principle to help you win the battle against software entropy. Making sure your dependencies are under control matters. Several tools can enforce dependency restrictions, such as JDepend. However, in a real project with many classes, packages and modules, the real issue is how to decide and configure the allowed and forbidden dependencies. Per class? Per package? Per module? Based on gut feeling? Is there a theory for that?

Of course, in a layered architecture, the layers specify the dependencies. This is not bad, but I am sure we can do better.

Smarter dependencies

To go further, I suggest expanding our vocabulary of concepts. In OO languages such as Java, everything is a class (or an interface), grouped into packages. Such a classification is not really helpful. Fortunately, several books provide ready-to-use vocabularies in the form of pattern languages (not only design patterns, but patterns in general). Some of these patterns are foundations on which rules to manage dependencies can be proposed.

Disclaimer: the dependency rules suggested below are hypotheses to be debated and verified against a corpus of actual projects; I would be happy to be given counter-examples and counter-arguments.

The child really depend upon the mother

Domain Driven Design

The book Domain Driven Design by Eric Evans defines a rich vocabulary of concepts used in every application, and we can leverage that vocabulary to propose some dependency principles between them:

  • ValueObject never depends upon Entity nor Services
  • Entities should not depend upon Services (maybe not a hard rule)
  • Generic SubDomain should not depend upon Core Domain
  • Core Domain should not depend upon Cohesive Mechanism (the “What” should not depend upon the “How”)
  • Domain Layer should not depend on any infrastructure code
  • Abstract Core module never depends on its specialized or implementation modules

Analysis Patterns

The book Analysis Patterns by Martin Fowler also provides patterns as a richer vocabulary, from which we could propose:

  • Elements from a Knowledge Level should not depend upon elements from the corresponding Operation Level

I did not find that rule written in the book, but every example appears to support it. Considering that classes and subclasses in usual OOP are a special case of Knowledge Level built into the language, this would lead to:

  • Abstractions never depend upon their Implementations

which is similar to the second part of the Dependency inversion principle by Robert C. Martin:

Abstractions should not depend upon details. Details should depend upon abstractions.

Since many analysis patterns in the Analysis Patterns book involve the Knowledge Level pattern, this single dependency rule already applies to many analysis patterns: Party Type Generalizations, Measurement, Observation, Protocol, Associated Observation, Measurement Protocol etc. The pattern Quantity can be seen as a specialized ValueObject (see Domain Driven Design above) hence should also not depend on any Entity nor Service.

Design Patterns

The book Design Patterns: Elements of Reusable Object-Oriented Software by Erich Gamma et al. presents the classic design patterns. These patterns define named participants. In the pattern participant ignorance principle I discussed the concepts of ignorant vs. dedicated participants within a pattern, and their consequences for dependencies:

  • Ignorant pattern participants should never depend on dedicated participants
  • Pattern participants never depend on the “Client” participant
  • For each ConcreteX participant, the corresponding abstract X never depends on it (Abstractions never depend upon their Implementations)

In practice, this means:

  • In the Adapter pattern, the Adaptee should not depend upon the Adapter, and the Target should depend upon nothing
  • In the Facade pattern, the sub systems should not depend upon the Facade
  • In the Iterator pattern, the Aggregate should not depend upon the Iterator; however, every Java collection is a counter-example, as each contains its own ConcreteIterator.
  • In creational patterns (Abstract Factory, Prototype, Builder etc.), the Product and ConcreteProduct should not depend on the dedicated participant that does the allocation (the Factory etc.)
  • And so on for other patterns, some of which are already discussed in the pattern participant ignorance principle.

In short, if we look at the design patterns as a set of types with inheritance/implementation, invocation/delegation and creation relationships between them, the dependencies should not flow in the reverse direction of the relationships; in other words, using UML arrows, the dependencies should only be allowed in the direction of the arrows.

Addiction to sugar is a kind of dependency

Patterns of Enterprise Architecture

In the book Patterns of Enterprise Application Architecture by Martin Fowler, the Separated Interface Pattern proposes a way to manage dependencies by defining the interface between packages in a separate package from its implementation. This is similar to the Dependency inversion principle, as discussed here, which states:

A. High-level modules should not depend upon low-level modules. Both should depend upon abstractions.

By the way this is also very similar to the recommendation in Domain Driven Design:

Abstract Core module never depends on its specialized or implementation modules.

Finally, in the spirit of UML stereotypes that we sometimes put on packages to express their intent:

  • Utils never depends on anything but other Utils

What for?

If we manage to make every use of the above patterns explicit in the source code, for instance using Java annotations or simply Javadoc tags, then it becomes possible for a tool to deduce the dependency constraints and automatically enforce them.

Imagine: just add @pattern ValueObject in your Javadoc comment, and voilà! A tool is now able to deduce that if you happen to import anything but basic java.* you must be warned.

Of course the fine tuning of the default behavior can take some time (do we accept that ValueObjects may depend upon low level utils like StringUtils? Probably yes), but the result will at least be stable regardless of the refactorings.
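As a sketch of what that could look like with a Java annotation instead of a Javadoc tag (both the annotation and the checking tool below are hypothetical, not an existing library):

import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

// Marker declaring that a class participates in the ValueObject pattern
@Target(ElementType.TYPE)
@Retention(RetentionPolicy.CLASS)
@interface ValueObject {
}

// A tool (annotation processor, bytecode analyser...) could then warn whenever a
// @ValueObject class depends on anything but java.* types or other @ValueObject classes.
@ValueObject
class Money {
    private final java.math.BigDecimal amount;
    private final String currency;

    Money(java.math.BigDecimal amount, String currency) {
        this.amount = amount;
        this.currency = currency;
    }
}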

Given the existing variety of patterns over there, I am confident that just any class or interface within a project can be declared as being a participant in at least one pattern, and have therefore its dependency constraints deduced at the same time.


Software development is more than mastering syntax

When a junior developer joins our team, it is interesting to realize how mastering language syntax and API is just a small part of the skills that matter.

Just after the syntax and API knowledge (actually knowing where to find what you want in the API is enough), there are a few other skills you definitely must know.

First comes unit testing. It is not difficult to begin with, but it is not that easy to tune tests so that they don't depend on too many things, which saves time when refactoring. Mocking objects is the next step.

Mastering dependencies is another key to successful software development. Dependency Injection is a buzzword for a very simple idea that everyone can apply, simply by being careful to do the ”new” where it is most convenient to change, using constructors with arguments or setters.

Design patterns are of course an essential tool to build flexible software. Every time there is a design problem, design and analysis patterns often provide a good solution. They also guide you toward good object-oriented thinking. However, when there is no design problem, there is no need for patterns.

The last very important thing to understand in software development is to look for simplicity, always. It is easy to build a complicated solution; it is difficult to build a simple one. But a simple solution has so many advantages… To simplify a design, domain analysis is a great tool: the more you understand the domain, the better you can align your design to it, and the simpler it will be. You can also simplify by separating orthogonal things or by unifying similar things.

There are also things that we must unlearn, like doing design first and testing at the very end of a project; most of the time it is also useless to create extensive UML diagrams. Good design can hardly come from diagrams: if you are not close enough to the code you may miss the point and end up with a flawed design, even though it looks great as a diagram.

It is counter-intuitive to beginners to focus on anything but syntax. Therefore, unless you have a colleague to introduce you to other concerns, or unless you are really motivated to look for advice on the Internet, you can live a long time without even knowing there is more to it than that…

Initially published on Jroller on Tuesday June 28, 2005
