Domain-Driven Design by the book
Starting a new project is fun. You (assuming you, reader, are The Architect) can make all the decisions and do things right. You can use all the newest technologies you can find. And what do you do when it comes to designing the business logic? Hopefully you base your decision on the level of complexity of the problem you are solving. If the problem is relatively easy, you implement a CRUD solution, right? On the other hand, if it looks really complex, you can enjoy having a good reason for a CQRS architecture. But what’s between these two extremes? Enter (again) classic Domain-Driven Design by the book. I understand that in all this CQRS noise you could have forgotten about this option. Let me remind you about the architectural properties of such a design.
10,000 feet view
The best way to describe such a system is to use Jeffrey Palermo’s Onion Architecture approach. The domain model is the innermost layer of the onion. It contains your business logic and does not depend on anything significant (like WCF or NHibernate). The next layer of the onion would be something I usually call the Application Layer. Notice that I skipped the Domain Services Layer. That was intentional. I consider domain services (if you really, really need to have them) part of the domain model. The Application Layer is where your commands and command handlers live. For each business action there is a pair: a command and its handler. Trust me, making business operations explicit by implementing them as classes (as opposed to methods on a class/interface) pays off in the long run. If the application has a user interface, it is the outermost layer of the onion. If not, the Application Layer becomes the outermost one, with its commands exposed via some remote interface (like WCF or NServiceBus).
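A command/handler pair might look like this minimal sketch (in Python for brevity, even though the stack discussed here is .NET; all names are made up):

```python
from dataclasses import dataclass

# A command is a plain, serializable description of one business operation.
@dataclass
class ApproveOrder:
    order_id: int

# Each command gets its own handler class: the business operation is explicit.
class ApproveOrderHandler:
    def __init__(self, order_repository):
        self._orders = order_repository

    def handle(self, command):
        # The handler only orchestrates; business rules stay in the domain model.
        order = self._orders.get(command.order_id)
        order.approve()
```

The UI (or a remote interface) only ever sees commands; the handler is an internal detail of the Application Layer.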
Repositories and data access
You probably know I am a fan of repositories. I like the explicitness of their interface. In the architecture we are talking about, you would probably use some kind of ORM (like my favourite, NHibernate), so repositories would end up being a thin wrapper around the almighty mapper library. Using an ORM also implies that you have a Unit of Work in your solution. Most (probably all) of the major mappers have it implemented, so it makes no sense not to use it, or to write your own implementation.
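With the mapper doing the heavy lifting, a repository shrinks to a few delegating methods. A hypothetical sketch (again in Python for brevity), with the session API loosely modeled on NHibernate’s ISession:

```python
class Order:
    # Stand-in aggregate root for the sketch.
    def __init__(self, order_id):
        self.order_id = order_id

class OrderRepository:
    """Thin wrapper around the ORM session; the session IS the unit of work."""

    def __init__(self, session):
        self._session = session

    def get(self, order_id):
        # Delegates identity-map lookup and loading to the mapper.
        return self._session.get(Order, order_id)

    def add(self, order):
        # Registers the new object with the unit of work; the actual INSERT
        # happens when the unit of work is flushed/committed.
        self._session.add(order)
```

The explicit interface stays; the implementation is a couple of lines per method.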
The corollary of having a unit of work is the possibility of violating the aggregate boundaries. It needs discipline (and you, as The Architect, are the one who is responsible for maintaining it) to avoid transactions spanning multiple aggregates. Why is it so important? Because Aggregate Roots are usually used as concurrency units. Involving two concurrency units in a transaction can lead to all sorts of nasty things, including race conditions and deadlocks. There is one exception to this One Transaction-One Aggregate rule, namely creating new aggregates. Because newly created aggregates, for obvious reasons, can’t yet be involved in two transactions (thus causing concurrency problems), you can create them as part of a transaction initiated for another, already existing, aggregate.
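The rule and its exception can even be made mechanical. This is an illustrative guard, not a real library API:

```python
class SingleAggregateUnitOfWork:
    """Illustrative guard: at most one EXISTING aggregate may be modified per
    transaction, while any number of newly created aggregates are allowed."""

    def __init__(self):
        self._existing_root = None
        self._new_roots = []

    def register_modified(self, root):
        # One Transaction-One Aggregate: a second existing root is an error.
        if self._existing_root is not None and self._existing_root is not root:
            raise RuntimeError("Transaction may span only one existing aggregate")
        self._existing_root = root

    def register_new(self, root):
        # The exception: a freshly created aggregate cannot yet be involved
        # in a concurrent transaction, so including it here is safe.
        self._new_roots.append(root)
```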
Of course, you can read as much as you like, including from other aggregates.
Testing

There are several things you can test in such a solution, each of which requires a different strategy. First of all, you should test your domain model, as it is your most valuable asset. You can read more about it in one of the previous posts. For testability reasons you should strive to squeeze as much business logic into Value Objects as you can. The remaining logic will be placed on the entities.
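This is why Value Objects pay off: they can be tested with no mocks and no database. A made-up example:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Money:
    amount: int  # minor units (e.g. cents), to avoid floating-point issues
    currency: str

    def add(self, other):
        # Business rule lives in the value object, where it is trivial to test.
        if self.currency != other.currency:
            raise ValueError("Cannot add different currencies")
        return Money(self.amount + other.amount, self.currency)

def test_money_addition():
    # Pure input/output: no setup, no infrastructure.
    assert Money(100, "EUR").add(Money(50, "EUR")) == Money(150, "EUR")
```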
Another thing worth testing is persistence. You have to prove that you can persist and load objects in any valid state. Because it would not be practical to test all the valid states, I would recommend testing the most complex ones. Odds are that simpler states will just work.
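A persistence test is essentially a save, reload-in-a-fresh-session, compare cycle. A sketch, with a hypothetical `session_factory` standing in for whatever your mapper provides:

```python
def assert_roundtrips(session_factory, aggregate):
    """Persist an aggregate in one session, reload it in a fresh one,
    and verify nothing was lost. `session_factory` is an assumed API."""
    # Save in the first unit of work...
    with session_factory() as session:
        session.add(aggregate)
        session.commit()
    # ...and reload in a second, so we know the data really hit the store.
    with session_factory() as session:
        reloaded = session.get(type(aggregate), aggregate.id)
        assert reloaded == aggregate
```

Run this helper against your most complex aggregate states rather than every state.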
Application layer tests are also worth the effort. They come in two flavours: with mocked repositories and with real repositories. Which one is more useful depends on the particular case. Testing with real repositories of course requires some database. The easiest ones to use in a test environment are SQLite and SQL Server CE.
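For the real-repository flavour, SQLite’s in-memory mode gives a throwaway database per test. A minimal sketch using Python’s built-in sqlite3 module (the schema is made up):

```python
import sqlite3

def test_order_persists():
    # ":memory:" creates a private, throwaway database for this connection.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, status TEXT)")
    conn.execute("INSERT INTO orders (id, status) VALUES (?, ?)", (1, "approved"))
    # Read it back to prove the write went through the real database engine.
    (status,) = conn.execute("SELECT status FROM orders WHERE id = 1").fetchone()
    assert status == "approved"
    conn.close()
```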
Last but not least, there is usually some UI code (like controllers) to be tested. Don’t forget about it.
Events

Yes, events have their place in this architecture, too. We need them because they are the ultimate integration mechanism. You can do all sorts of cool things if your system publishes the changes to its internal state as a series of events. If you don’t believe me, read or listen to Udi Dahan when he talks about SOA done right.
You can implement events in a classic DDD solution in two ways: either make them a separate concept, as in this article by Udi, or make them part of your domain model (as proposed by Eric Evans in one of his great presentations). I consider the latter approach cleaner but more difficult to implement properly. The choice is yours.
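The second approach, events recorded inside the domain model, could be sketched like this (an illustrative base class, not Evans’s actual code):

```python
class Entity:
    """Base class letting aggregates record events as part of the domain model."""

    def __init__(self):
        self._events = []

    def _raise(self, event):
        # Entities record what happened; infrastructure publishes it later.
        self._events.append(event)

    def collect_events(self):
        # Hand the recorded events to the publisher and clear the buffer.
        events, self._events = self._events, []
        return events

class Order(Entity):
    def __init__(self, order_id):
        super().__init__()
        self.order_id = order_id

    def approve(self):
        self._raise({"type": "OrderApproved", "order_id": self.order_id})
```

After the unit of work commits, the infrastructure collects the events and publishes them to the outside world.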
Queries

Some systems really need a user interface. Really. From time to time it is good to display something to the user and let him make some decisions. It makes users feel important etc. So how do you get the data out of the store so that the user can see it? As you may have read before on this blog, I am a big opponent of using repositories for getting the data to the views. This is a separate responsibility which I like to implement in Finder classes. Another issue is how you implement these finders. As usual, you have more than one option. One is to have a set of classes parallel to your domain model that are mapped to the same tables in the database. Of course you would also need another set of ORM mappings. You can read more about this approach here. Another, also valid, approach would be to map domain model classes to DTOs using a tool such as AutoMapper. I hear you saying that mapping domain objects to DTOs is considered an anti-pattern. I guess I have said it a few times. But what I am proposing here is a bit different. You should encapsulate the way you get your DTOs in the finders so that it’s not an architectural issue any more. If it’s not an architectural issue, it can’t be an anti-pattern, right? You may have some finders that issue plain old SQL, some that map from domain objects and some that use database-mapped DTOs. Use whatever method works best to solve the problem at hand and don’t forget to encapsulate the solution.
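A finder that issues plain old SQL and returns flat DTOs might look like this (names and schema invented; dicts stand in for DTO classes):

```python
class OrderSummaryFinder:
    """Finder: read-side queries live here, not in repositories. How the DTOs
    are produced (raw SQL, mapped DTOs, AutoMapper-style mapping) is an
    encapsulated implementation detail of this class."""

    def __init__(self, connection):
        self._connection = connection

    def find_pending(self):
        rows = self._connection.execute(
            "SELECT id, customer, total FROM orders WHERE status = 'pending'")
        # Flatten rows into view-friendly DTOs; the view never sees the domain model.
        return [{"id": r[0], "customer": r[1], "total": r[2]} for r in rows]
```

The view consumes whatever the finder returns; swapping raw SQL for mapped DTOs later changes nothing outside this class.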
I would like to hear what you think about this architecture. For sure it’s not the ideal one, the one you dream about. But I’ve implemented it a few times and it worked reasonably well. Do you have any ideas how I can improve it without introducing additional complexity?