Recently I started working on a completely new project. The client insisted on having a separate analysis and design phase before the main project starts, so we are now writing lots of documents. Although I don't like this kind of approach, I kind of understand them: it is their first project with our company, so they have no reason to trust us.

Why don't I like such an analysis and design phase? Simply because you can't get the architecture right just by reading the requirements document. You need to actually see it working to evaluate the various trade-offs. That's why I like the agile approach: doing just enough design when it's necessary and adapting as new requirements are implemented (or even discovered).

One of the trade-offs I found particularly difficult to verify was whether we should use the event sourcing pattern as the default persistence approach or stick to a classic relational database. Before you tell me that persistence is an implementation detail of each service, think about the consequences a lack of standardisation would have here. This is a greenfield project that will consist of about ten services (or bounded contexts, if you will). I would argue that in such a case, having one suboptimal persistence approach that most services use is much better than having each service use an optimal mechanism and then dealing with maintaining five different data stores (think monitoring, backups, migrations, etc.).

Back to the main subject: should we use event sourcing or not? Or let me rephrase the question: are the business requirements easy to model using the event sourcing approach (compared to an object-oriented¹ approach)?

I cannot answer this question just by looking at the requirements. I need to actually see it working (or not working). I decided to use the following set of primitives (based on my previous research):

  • Receptor — transforming events to commands
  • Command handler — transforming commands to aggregate method calls
  • Aggregate — implementing business logic, emitting events for state changes
  • Views — providing lookups based on processed events
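To make these roles concrete, here is a minimal, self-contained sketch of the four primitives in Python (the project itself is coded in C#, and all the type names and the fee rule below are hypothetical, not the actual system's model):

```python
from dataclasses import dataclass

# Hypothetical message types for illustration only.
@dataclass(frozen=True)
class OrderPlaced:        # an event
    order_id: str
    amount: int

@dataclass(frozen=True)
class ChargeFee:          # a command
    order_id: str
    fee: int

@dataclass(frozen=True)
class FeeCharged:         # an event emitted by the aggregate
    order_id: str
    fee: int

def receptor(event):
    """Receptor: transforms events into commands."""
    if isinstance(event, OrderPlaced):
        # Hypothetical rule: charge a 1% fee on every placed order.
        yield ChargeFee(event.order_id, event.amount // 100)

class FeeAggregate:
    """Aggregate: implements business logic, emits events for state changes.

    (Shortcut for brevity: state is mutated directly rather than by
    applying the emitted events back onto the aggregate.)
    """
    def __init__(self):
        self.already_charged = set()

    def charge_fee(self, order_id, fee):
        if order_id in self.already_charged:
            return []                      # business rule: charge only once
        self.already_charged.add(order_id)
        return [FeeCharged(order_id, fee)]

def command_handler(aggregate, command):
    """Command handler: transforms commands into aggregate method calls."""
    if isinstance(command, ChargeFee):
        return aggregate.charge_fee(command.order_id, command.fee)
    return []

class FeeView:
    """View: provides lookups based on processed events."""
    def __init__(self):
        self.fees = {}

    def apply(self, event):
        if isinstance(event, FeeCharged):
            self.fees[event.order_id] = event.fee
```

Wired together, an `OrderPlaced` event flows through the receptor into a `ChargeFee` command, the command handler turns that command into an aggregate method call, and the emitted `FeeCharged` event is folded into the view for lookups.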

Sagas (or, more precisely, process managers) are no different from aggregates in this model. With these tools in my hands I was able, in just two days, to build a walking skeleton that can run one of the most complex processes in the new system (matching, fulfilling, charging fees and generating accounting entries for a purchase order). I was actually surprised by my productivity, so I spent a moment reflecting on what had happened.

Event sourcing is great for prototyping because it makes you focus on behaviour rather than state. You start with just enough state to allow for proper behaviour. In my case I deliberately omitted currencies in the first draft; the command-event chains were not affected by them. Only when I thought the model was fine did I add currencies to my amounts to make it more realistic. You start with commands and events and then form aggregates where consistency is needed. Later you can easily move aggregates around, remove some and add new ones, usually without changes to commands and events.
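As a tiny illustration of that "just enough state" progression (hypothetical types, not the project's actual model): the first draft can carry bare numeric amounts, and a later draft can swap in a money value object without altering which commands produce which events.

```python
from dataclasses import dataclass

# First draft: just enough state -- a bare amount, currency omitted.
@dataclass(frozen=True)
class FeeCharged:
    order_id: str
    amount: int

# Later refinement: the amount gains a currency. The event still means
# the same thing and the command-event chain is untouched; only the
# payload shape becomes more realistic.
@dataclass(frozen=True)
class Money:
    amount: int          # in minor units, e.g. cents
    currency: str

@dataclass(frozen=True)
class FeeChargedV2:
    order_id: str
    amount: Money
```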

After two days I am pretty sure we'll stick with event sourcing as the default approach in the system. Of course, having a default approach does not mean we won't use other mechanisms when necessary. Hell, we may even use Microsoft CRM in one bounded context!


¹ Notice how I contrasted event sourcing with object-oriented. The fact that I use C# to code it does not imply the code is actually object-oriented. In fact, event sourcing is a purely functional approach defined by two functions: f(state, command) → event and f(state, event) → state. For details, ask Greg Young or Jérémie Chassaing.
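A minimal sketch of those two functions, using a deliberately trivial account balance as the state (the commands and events are hypothetical stand-ins, not Greg Young's or the project's actual types):

```python
from functools import reduce

# State is just the current balance (an int).
# Commands and events are (kind, amount) pairs for brevity.

def decide(state, command):
    """f(state, command) -> events: pure business logic, no mutation."""
    kind, amount = command
    if kind == "deposit":
        return [("deposited", amount)]
    if kind == "withdraw" and amount <= state:
        return [("withdrawn", amount)]
    return []                        # rejected, e.g. insufficient funds

def evolve(state, event):
    """f(state, event) -> state: folds one event into the next state."""
    kind, amount = event
    return state + amount if kind == "deposited" else state - amount

def replay(events, initial=0):
    """Current state is simply a left fold of evolve over the history."""
    return reduce(evolve, events, initial)
```

Note how `decide` and `evolve` never mutate anything: the current state is always recoverable by replaying the event history through `evolve`, which is what makes the approach purely functional.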
