I once worked in the IT department of a large restaurant chain. They had a secret sauce, and it wasn’t one you could eat.
In a restaurant kitchen, when a customer orders a dish from the menu, you have to cook it. You take the recipe, with its list of ingredients and their quantities, and you prepare the dish by following the instructions. In theory, if the recipe calls for a quarter of a lemon, then you should be able to cook 4 dishes with one lemon. Unfortunately, when you analyze the actual consumption of ingredients over a day, you realize that in practice you made 20 dishes with 6 lemons. That’s 3.3 dishes per lemon! Life is not as nice as theory: in real life you have waste. A quarter of a lemon falls on the floor and you can’t use it. A lemon was too small, so you had to use a “bigger quarter”, and there are only 3 bigger quarters in one lemon. And the last lemon could not be used up completely, because the number of dishes actually ordered is not a multiple of 4.
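The gap is easy to quantify. Here is a minimal sketch, using only the numbers from the story (variable names are mine, not a real accounting system):

```python
# Theoretical yield: the recipe needs a quarter of a lemon per dish.
dishes_per_lemon_planned = 4

# Actual consumption observed over one day: 20 dishes, 6 lemons.
dishes_cooked = 20
lemons_used = 6

# Roughly 3.33 dishes per lemon instead of the planned 4.
actual_yield = dishes_cooked / lemons_used

# The fraction of lemon "lost" compared to the plan, about 17%.
waste_ratio = 1 - actual_yield / dishes_per_lemon_planned

print(f"actual yield: {actual_yield:.2f} dishes per lemon")
print(f"gap vs plan: {waste_ratio:.0%}")
```

Trivial arithmetic, but this is exactly the number the accountants were watching.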
The restaurant chain I worked for was very good at analytical accounting. This was their secret sauce. They were actually monitoring the usage of lemons, among other things. That was key to their profitability, because businesses like restaurants have quite narrow margins. It was important for them to understand the gap between the recipe and the actual cooking of the dishes.
It turns out that tracking the gaps between what was planned (the recipe) and what was actually done (the cooking) is important for lots of businesses, in all domains.
In manufacturing, you have a bill of materials, the BOM, that describes what is necessary to build a product. The BOM states that you need 12 screws to close the enclosure. In practice, however, you may lose one screw and end up consuming 13, not 12. If the tracking report shows that this happens often, and if the screws are expensive enough, then some engineers will want to improve the production line to prevent it. You may automate, or make the screwdrivers magnetic.
In electronics manufacturing, you build silicon wafers that will yield thousands of chips. That is the theory, before the quality control where you discover how many of them are defective. This kind of manufacturing is expensive, so you want to monitor it.
The point so far is that mature businesses have lots of competitors, and they often compete on continuously optimizing their delivery process. That requires tracking. It’s a recurring business problem, and you need software for it.
So how do we design such software in this situation?
The answer is that you most likely have two distinct Bounded Contexts: one for describing the plan, and another for tracking the gaps between the actual and the plan. And even if the two contexts often seem to deal with the same real-life concepts, they have in fact no reason to look similar.
The one-context approach
At first sight, “the recipe from the book” and “the recipe as I actually cooked it” look like the same thing. That’s why a common approach is to mix both models into one. I think this is a poor hack.
For example, project management tools focus on the theoretical plan, and then add extra fields to track the time actually burnt, directly on the elements of the plan. This may work as long as things stay very simple. The further we move into complexity, the less sense it makes to keep the two models aligned. With a TODO list, “Done!” might be enough, but as the tracking becomes more complex, it will grow its own tracking language.
Modeling the recipe and tracking the actual cooking of the dish look similar, but they are totally different in their intentions. The recipe aims at telling how to prepare the dish, including instructions on how to stir or how to cut the vegetables. The tracking focuses on waste, on errors, on improving the purchasing process or the supply chain management. That’s a strong reason to call for a different point of view.
For tracking, you may not be able to observe all the details. The recipe talks about quarters of a lemon, but a quarter of a lemon does not exist on the market. You can only measure the number of lemons you bought, and the number of lemons remaining at the end of the day.
What actually happens during cooking, during a manufacturing process or during a project can be totally different from the plan. Perhaps there were no lemons left, so the cook exceptionally substituted some lemon juice, an ingredient that is not even in the recipe. A good tracking model should accommodate tracking this kind of event.
On a similar note, for steering purposes, a coarse level of granularity is often enough. You can neglect details of the process. Perhaps a weekly or monthly inventory of the remaining ingredients will do. Tracking the actual consumption of ingredients dish by dish would be a total waste of time!
Modeling the plan (the bill of materials, the manufacturing steps, etc.) is not the challenging part of this Actual vs Plan Gap Tracking relationship.
Typically the tracking context needs its own distinct model, totally different from the planning model. For tracking cargo shipments, for example, you would probably track loading and unloading events at various ports: the tracking model would be a journal of events. For tracking the consumption of materials, the tracking model would be a time-stamped snapshot of the current inventory.
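To make the contrast concrete, here is a minimal sketch, assuming Python dataclasses (all names and sample values are hypothetical): the planning model describes ideal quantities, while the tracking model is just a journal of time-stamped stock events that need not mention the plan at all.

```python
from dataclasses import dataclass
from datetime import datetime

# --- Planning context: the ideal recipe ---------------------------------
@dataclass(frozen=True)
class IngredientLine:
    ingredient: str
    quantity_per_dish: float   # in recipe units, e.g. 0.25 lemon per dish
    unit: str

# --- Tracking context: a journal of what actually happened --------------
@dataclass(frozen=True)
class StockEvent:
    timestamp: datetime
    ingredient: str
    delta: float               # +N purchased, -N counted as consumed or wasted
    note: str = ""

# A hypothetical day: 6 lemons bought in the morning, none left at closing.
journal = [
    StockEvent(datetime(2024, 5, 1, 8, 0), "lemon", +6.0, "morning purchase"),
    StockEvent(datetime(2024, 5, 1, 22, 0), "lemon", -6.0, "end-of-day count"),
]
```

Note that the journal speaks the language of purchases and counts, not the language of quarters of lemon.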
The tracking model may link to the plan. This case is a bit like the Knowledge Level pattern (Fowler): the plan is the Knowledge Level, and the tracking of the actual production is the Operation Level. Here the Knowledge Level defines the ideal case, but the actual behavior at the Operation Level will regularly diverge from the ideal, because this discrepancy is precisely what we want to track.
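In code, this link can be as light as an optional reference from the tracking record to the plan element it relates to. A sketch under that assumption (all names are hypothetical):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class RecipeLine:
    """Knowledge Level: the ideal case defined by the plan."""
    line_id: str
    ingredient: str
    quantity_per_dish: float

@dataclass(frozen=True)
class ConsumptionRecord:
    """Operation Level: what actually happened in the kitchen."""
    ingredient: str
    quantity: float
    # None when the actual diverges from any plan element,
    # e.g. the lemon-juice substitution from the story.
    recipe_line_id: Optional[str] = None

substitution = ConsumptionRecord("lemon juice", 0.1, recipe_line_id=None)
```

The optional link keeps the divergent cases first-class citizens instead of errors to be rejected.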
The tracking can also live in its own bubble, with no link to the plan. If that’s the case, a separate reconciliation mechanism will do the comparison to highlight the gaps.
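A reconciliation step can then compare the two bubbles after the fact. A minimal sketch, assuming both sides can be reduced to quantities per ingredient (the function and its inputs are illustrative, not a real API):

```python
def reconcile(planned: dict[str, float], actual: dict[str, float]) -> dict[str, float]:
    """Return the gap (actual minus planned) per ingredient.

    A positive gap means over-consumption; an ingredient appearing
    only on the actual side (an unplanned substitution) shows up too.
    """
    ingredients = planned.keys() | actual.keys()
    return {i: actual.get(i, 0.0) - planned.get(i, 0.0) for i in ingredients}

# 20 dishes at a quarter lemon each should consume 5 lemons; 6 were used,
# and lemon juice appears only on the actual side.
gaps = reconcile(planned={"lemon": 5.0},
                 actual={"lemon": 6.0, "lemon juice": 0.1})
# gaps == {"lemon": 1.0, "lemon juice": 0.1}
```

Because the reconciliation is a separate mechanism, neither context has to know about the other’s model.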
A developer example
As usual, as developers we are already familiar with this actual vs plan dichotomy: it’s just called testing.
A test defines the plan: that’s the expected value, or the expected behavior on a mock. A test also observes the actual result of the code and tracks all the gaps between them in the test results report. It’s obvious that the model of the plan and the model tracking the actual results are different, and use a totally different language.
In this area, simulation testing also uses a reconciliation mechanism a posteriori to track the discrepancies.
Another example is simply software project planning and tracking. With software projects we know very well that forcing a match between plan and execution is a really dumb idea.
The key thing in this stereotypical relationship is to be fully aware of the two distinct contexts. Avoid the trap of mixing both contexts, unless it is a deliberate and conscious decision. Usually recognizing the two contexts will lead to the conclusion that distinct models are needed, and this will make everything simpler.