Posts tagged DDD

Modelling accounts, event driven part 2

In the previous post I discussed various approaches to modelling account operations. The outcome of that discussion was the insight that, when using event sourcing, aggregates are the derived concept, with the primary concept being the domain invariants. I used money transfer as an example. As Maksim pointed out, in such a scenario there is usually another invariant, one that was missing from my model:

The balance of an account should not be less than zero

It is clear that this invariant cannot be satisfied by our Transaction aggregate. We must re-introduce the Account as an aggregate. By the way, what do you think about the new requirement? It is pretty specific, isn’t it? This might be a sign that we are missing a more general concept in the model. How about this:

Account can reject participation in a transaction

In Domain-Driven Design this is called refactoring towards deeper insight. Thanks to it we can now implement all kinds of accounts, e.g. credit-only accounts, debit-only accounts (useful on the system boundary), credit card accounts (accounts whose balance cannot be positive) and many more, and we gain a much better understanding of the domain concepts.

Let’s try to sketch the event-command chain for a successful transaction that would honour both consistency requirements:

  1. The Transfer command creates a new instance of a Transaction aggregate. As a result, a Prepared event is emitted.
  2. The Account receptor reacts to the Prepared event by emitting two commands, one for each account involved in the transaction: PrepareDebit and PrepareCredit.
  3. The destination Account processes the PrepareCredit command based on its internal state and rules. In the case of a normal (non-negative balance) account, it does nothing. As a result, a CreditPrepared event is emitted.
  4. The source Account processes the PrepareDebit command based on its internal state and rules. In the case of a normal (non-negative balance) account, it decrements the available funds value. As a result, a DebitPrepared event is emitted. Alternatively, a DebitPreparationFailed event can be emitted if the account rejects participating in the transaction.
  5. The Transaction receptor reacts to the CreditPrepared and DebitPrepared events by emitting appropriate notification commands.
  6. The Transaction aggregate processes the notification. If either account rejected the operation, it immediately emits a TransactionCancelled event; accounts that successfully prepared themselves for this transaction should react to it by undoing any changes. When an account completes preparation, an appropriate Confirmed event is emitted to update the Transaction state. If both accounts have confirmed, a pair of Debited/Credited events is emitted in addition to the Confirmed event. All three events are emitted in one transaction, ensuring that the funds transfer is done atomically.
  7. The Account receptor reacts to the Debited and Credited events by emitting appropriate notification commands.
  8. The destination Account processes the notification command based on its internal state and rules. In the case of a normal (non-negative balance) account, it increments both the available funds and balance values.
  9. The source Account processes the notification command based on its internal state and rules. In the case of a normal (non-negative balance) account, it decrements the balance value.
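The confirmation phase from steps 5 and 6 can be sketched in code. This is only an illustration under simplifying assumptions: the event names follow the steps above, but the aggregate base class and event plumbing are reduced to an in-memory list.

```csharp
using System;
using System.Collections.Generic;

// Illustrative sketch of the Transaction aggregate's confirmation phase.
// Event publishing is reduced to an in-memory list; a real aggregate
// would append these events to its event stream in one commit.
public class Transaction
{
    public List<string> EmittedEvents { get; } = new List<string>();
    bool debitConfirmed, creditConfirmed, cancelled;

    public void HandleDebitPrepared()
    {
        if (cancelled) return;
        debitConfirmed = true;
        Confirm("DebitConfirmed");
    }

    public void HandleCreditPrepared()
    {
        if (cancelled) return;
        creditConfirmed = true;
        Confirm("CreditConfirmed");
    }

    public void HandleDebitPreparationFailed()
    {
        cancelled = true;
        EmittedEvents.Add("TransactionCancelled");
    }

    void Confirm(string confirmedEvent)
    {
        EmittedEvents.Add(confirmedEvent);
        if (debitConfirmed && creditConfirmed)
        {
            // Emitted together with the final Confirmed event, in the
            // same commit, so the funds transfer is atomic.
            EmittedEvents.Add("Debited");
            EmittedEvents.Add("Credited");
        }
    }
}
```

A test can drive the happy path by delivering both preparation events and asserting that Debited and Credited appear together, and the failure path by delivering DebitPreparationFailed and asserting cancellation.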

The double-entry accounting rule is satisfied by the Transaction aggregate emitting both Credited and Debited events in one transaction, while the veto requirement (a generalization of the non-negative balance invariant) is satisfied by a variation of the two-phase commit protocol. Despite the fact that the whole interaction is asynchronous (events causing commands to be sent to other aggregates), the externally visible state of the system is always consistent: either there are no entries or there is a pair of Credited and Debited entries. One consequence of eventual consistency is that, if the settlement phase is lagging for some reason, the available funds value is lower than it could be. This means there might be some false-positive rejections, which is of course not desirable. But it also shows how important monitoring is in asynchronous systems.


Modelling accounts, event driven

I’d like to share the results of my research on modelling an account. It is based mainly on this discussion on the CQRS mailing list and specifically on Greg’s example. I’ll start with a slightly naïve model.

Account as an aggregate

Let’s model Account as an aggregate. What interface would an Account need? How about Debit and Credit? Here’s a sketch of what a command handler for a TransferFunds command would look like:

var uow = new UnitOfWork();
var from = accountRepository.GetAccountForNumber(fromNumber);
var to = accountRepository.GetAccountForNumber(toNumber);
from.Debit(amount);
to.Credit(amount);
uow.Commit();

If you don’t like this approach, you are right. The problem with this model is that it requires two aggregates to be modified in one transaction. This violates the good practice of treating aggregate boundaries as transaction and consistency boundaries. Also, many event stores would not even allow you to create such a transaction in the first place. So, what can we improve?

Eventual consistency

We can take advantage of the idea of eventual consistency. Let’s refactor the command handler code:

var from = accountRepository.GetAccountForNumber(fromNumber);
from.TransferTo(amount, toNumber);

The effect of this code is an AccountDebited event published by the source account aggregate. This event then needs to be dispatched to a handler that would look like this:

var to = accountRepository.GetAccountForNumber(evnt.DestinationAccount);
to.Credit(evnt.Amount);

Can you spot the problem here? While eventual consistency is generally a great tool, like all great tools it can easily be misused. The code above ensures that eventually, given infinite time, the transaction will be balanced. Can we wait that long? Usually it will happen very quickly but sometimes, under heavy load, it can take a while. Can we tolerate this? The answer, as always, is: it depends. We need to go back to the business problem we are solving and ask ourselves whether we are violating the aggregate’s invariants by introducing eventual consistency.

In our example the invariant states that at any point in time credit and debit operations must balance (sum to zero). Clearly, after the command is processed but before the event processing happens, this rule is violated. What can we do about it?

Event driven modelling

Instead of trying to identify aggregates, let’s start with commands and events. Here’s what happens in the system when we want to transfer money:

The domain expert tells us that the rule we must comply with states that AccountDebited and AccountCredited events always happen together and they always balance to zero. What aggregate can we create here so that the rule is honoured? Clearly not Account. The aggregate we are looking for is Transaction. The interaction can be pictured as

The new command handler code would look like this:

var tx = new Transaction();
tx.Post(amount, fromAccount, toAccount);

and here’s the brand new Transaction aggregate:

public void Post(decimal amount, string fromAccount, string toAccount)
{
   this.Apply(new AccountDebited(amount, fromAccount));
   this.Apply(new AccountCredited(amount, toAccount));
}
Now the following question arises: if Account is not an aggregate, what is it? Clearly, end users want to see accounts, balances and history records. The answer is provided by the CQRS approach. Accounts exist only on the read side of the system. There should probably be a couple of them, one for each screen.
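A read-side account can be sketched as a projection that folds the AccountDebited/AccountCredited events into per-account balances. The projection class and its shape are my own illustration, not code from the original discussion.

```csharp
using System;
using System.Collections.Generic;

// Read-side sketch: a projection folding the AccountDebited/AccountCredited
// events emitted by the Transaction aggregate into per-account balances.
// The projection class itself is illustrative, not part of the original post.
public record AccountDebited(decimal Amount, string Account);
public record AccountCredited(decimal Amount, string Account);

public class AccountBalances
{
    readonly Dictionary<string, decimal> balances = new Dictionary<string, decimal>();

    public decimal BalanceOf(string account) =>
        balances.TryGetValue(account, out var b) ? b : 0m;

    public void When(AccountDebited e) =>
        balances[e.Account] = BalanceOf(e.Account) - e.Amount;

    public void When(AccountCredited e) =>
        balances[e.Account] = BalanceOf(e.Account) + e.Amount;
}
```

Because the Transaction aggregate always emits both events in one commit, any projection that has seen a debit will shortly see the matching credit, so the read model stays balanced.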


The takeaway from this somewhat lengthy blog post is: if you are using the Event Sourcing approach, you should not start by identifying the aggregates but rather by identifying sequences of commands and events. Only then can you ask the business expert what the consistency boundaries are in a particular sequence. The result will be the definition of an aggregate.


Greg Young’s Advanced Class

Last week, from Wednesday to Friday, I attended Greg Young’s new advanced class. The event took place in the Pod Wawelem hotel, 50 metres away from the Castle. I hadn’t been to Greg’s previous training but I heard it was about Domain-Driven Design and CQRS. I was expecting something similar, but more in-depth. It turned out I was wrong, but for sure not disappointed. The new content is great! Here’s a brief summary.

Day 1 – Alarm clock

We spent almost the whole day discussing one topic: time. We learnt how hard it is to unit test time-dependent logic. Those who have tried putting Thread.Sleep or ManualResetEvent in a test know how painful it is. Then we discovered that every time-dependent piece of logic (e.g. if the cook does not respond in 5 minutes then…) can be transformed into an order-of-messages problem. Instead of waiting for some period of time, we send a message to our future selves that will inform us we should take some action. I don’t have to convince you that sending and receiving messages is super easy to test, right?
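A minimal sketch of the idea, with message names invented for the example: the 5-minute wait becomes an ordinary CookTimedOut message we schedule for our future selves, so a test only has to control the order of messages, never the clock.

```csharp
using System;
using System.Collections.Generic;

// Sketch of turning time into messages. Instead of starting a timer, the
// process asks the infrastructure to deliver a CookTimedOut message later;
// in a test we simply deliver that message by hand, no Thread.Sleep needed.
public class OrderProcess
{
    bool cookResponded;
    public List<string> ScheduledMessages { get; } = new List<string>();
    public List<string> Actions { get; } = new List<string>();

    public void Handle(string message)
    {
        switch (message)
        {
            case "OrderPlaced":
                // "Send a message to our future selves" instead of waiting.
                ScheduledMessages.Add("CookTimedOut");
                break;
            case "CookResponded":
                cookResponded = true;
                break;
            case "CookTimedOut":
                // The timeout is just another message; ignore it if the
                // cook already responded.
                if (!cookResponded) Actions.Add("EscalateToManager");
                break;
        }
    }
}
```

The test for the slow-cook path simply delivers OrderPlaced followed by CookTimedOut; the fast-cook path slips CookResponded in between and asserts that nothing is escalated.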

To prove it, we spent some time doing one of the well-known katas. I won’t tell you which one, so as not to spoil the surprise.

The last assignment of the day was a modelling task. We were modelling the business process of a restaurant. The funny/new/inspiring part of this assignment was that we did it by trying to figure out what a Word template for a document describing an order would look like and how it would be passed between various people in a restaurant. We analysed which parts of the document are filled in by each person and in what order.

Day 2 – Restaurant

We spent the whole day implementing the solution for the restaurant according to the model we had created the day before. We went through quite a few patterns from Enterprise Integration Patterns:

  • document message – that’s how we represented our order
  • adapter – waiter
  • enricher – cook and assistant manager, adding, respectively, preparation time and prices
  • competing consumer – when we have more than one cook
  • message dispatcher – a better (does not require locks) solution for load balancing cooks
  • message broker – centralize setup of queues

Day 3 – More restaurant & Event Store

Before lunch we managed to cover some more advanced messaging patterns, namely:

  • correlation id – how we trace orders
  • routing slip – move responsibility for order flow from broker to a dedicated entity to add flexibility (e.g. normal flow & dodgy customer flow)
  • process manager – when routing slip is not enough (e.g. smart failure handling)
  • publish-subscribe – how to distribute information about changes in order state
  • event message – as above
  • wire tap – centralize diagnostics

Quite impressive, isn’t it? After lunch we started playing with the Event Store. We managed to implement a simple chat application using Event Store’s ATOM APIs. Cool, but the rest was even better. We learnt how all the concepts we had covered during the last two days can be easily implemented inside the Event Store using Projections. This stuff is really amazing, and I say it despite my antipathy for JavaScript.


The cost of the training for early birds was just 300 euros. In exchange we got:

  • three days of super high quality workshops with over 60 percent of coding time
  • gorgeous view of the Wawel Castle during coffee breaks
  • three course lunches every day
  • after-party & beer

If this was not enough, Greg managed to give away three free tickets for CS students. Kudos man, you rock!


IDDD Tour @ Kraków

I’d like to announce that on May 6-9 Vaughn Vernon is coming to Kraków to teach his Implementing Domain-Driven Design course (based on the book with the same title). The course is 4 days long (as you can easily guess from the dates ;)). The expected lecture-to-coding ratio is around 50%. The total price for early bird registration is 400 EUR.

You can register via Eventbrite.

See you there!


Services & Bounded Contexts


There is a common misconception that a Service in a Service Oriented Architecture is equal to a Bounded Context in Domain-Driven Design. It is not.


Bounded Contexts govern the way we speak about things and how we model them. They are about the language and the meaning of words (e.g. what account means within the bounded context of banking versus what it means within the bounded context of e-mail). In other words, bounded contexts are used to break down the problem domain.

Services, on the other hand, structure the way we implement the models. They are used to break down the solution domain. They should adhere to The Four Tenets of Service Orientation:

  • Boundaries are explicit
  • Services are autonomous
  • Services share schema and contract, not class
  • Service compatibility is based on policy


Let’s examine a distributed application server management tool as an example of a simple SOA. There are only two services: a Node service, which is installed on each machine and acts as an agent, and a Supervisor service, which controls and coordinates all nodes. There are also two bounded contexts: the context of a Node and the context of the whole Environment.

An example of a term which is used in both contexts but has a different meaning is the verb to deploy. Within the Node BC, to deploy means to install a Windows Service for a particular component on the current machine, while within the Environment BC it means to deploy and start all service components on all nodes according to the plan.

The Environment BC is implemented only in the Supervisor service, but the Node BC is implemented in both services. In the Supervisor service there is a Node aggregate that represents a single machine. It has a Deploy method that implements exactly the semantics of deploy as meant by the Node BC — it uses a REST API to ask the Node service to deploy a component. The rest of the Supervisor service implements the Environment BC.
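The two meanings of deploy can be sketched as follows. Everything here is illustrative: the real tool’s API is not shown in the post, so the REST call to the Node service is replaced by an in-memory install to keep the example self-contained.

```csharp
using System;
using System.Collections.Generic;

// Illustrative sketch of "deploy" in the two bounded contexts. The real
// Supervisor talks to the Node service over REST; here the call is
// replaced by an in-memory install so the example can run standalone.
public class Node
{
    public string Machine { get; }
    public List<string> Installed { get; } = new List<string>();
    public Node(string machine) { Machine = machine; }

    // Node BC: deploy = install one component on this machine.
    public void Deploy(string component) => Installed.Add(component);
}

public class EnvironmentService
{
    readonly IEnumerable<Node> nodes;
    public EnvironmentService(IEnumerable<Node> nodes) { this.nodes = nodes; }

    // Environment BC: deploy = roll out all components to all nodes
    // according to the plan (machine name -> components).
    public void Deploy(IDictionary<string, string[]> plan)
    {
        foreach (var node in nodes)
            foreach (var component in plan[node.Machine])
                node.Deploy(component);
    }
}
```

Note how the same verb appears on both classes but with different semantics: one acts on a single machine, the other orchestrates the whole environment by delegating to the first.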


Throughout the post I used the term to implement a bounded context. Wherever I used it, I meant that the code in the service implements the model that was created within that bounded context.
