Lessons learned with AxonFramework

Advice from a team lead having integrated AxonFramework in half a dozen applications over the past 3 years

Benoît Liessens
5 min read · Dec 27, 2018

It must have been about three or four years ago that I first came across AxonFramework. I had read about DDD and CQRS before but had never seen a fully fledged Java framework for them until then. Fortunately the project had some documentation and a Google group, so I took a deep dive …

At that time I was architect on a healthcare project — I still am today BTW — and we were about to kickstart a new greenfield application to handle the user accounts of our main customer-facing web application. Having acquired enough understanding of DDD, CQRS and AxonFramework, I decided to use it for this new application.

Fast forward, December 2018

That Axon-based application has been in production for quite a while now without major issues — at least none caused by the Axon framework. In preparation for the new features the team will be developing, it’s a good time to upgrade from Axon 2.4 to version 3.x. While doing so I came across a couple of decisions & suggestions I made 3 years ago — when I was still an Axon novice — that are worth a warning.

No. 1: Where to validate commands

I recall advising the team to implement the @CommandHandlers outside the aggregate, in a separate Spring component. Most of the commands in that application contain entity references whose validation requires a remote call. I didn’t want to clutter the aggregate class with that, and it felt more natural to handle these reference checks in an application layer, before invoking the actual aggregate instance. As a consequence we ended up having to load the aggregate instances from the Repository ourselves and invoke methods on the aggregate to apply() the events.
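In hindsight, such a handler looked roughly like the sketch below. The names and the SupplierRegistry reference check are illustrative, and the Repository/Aggregate API shown is the Axon 3 flavour:

import org.axonframework.commandhandling.CommandHandler;
import org.axonframework.commandhandling.model.Repository;
import org.springframework.stereotype.Component;

@Component
class BakeryCommandHandler {

    private final Repository<Bakery> repository;
    private final SupplierRegistry supplierRegistry; // remote reference checks

    BakeryCommandHandler(Repository<Bakery> repository,
                         SupplierRegistry supplierRegistry) {
        this.repository = repository;
        this.supplierRegistry = supplierRegistry;
    }

    @CommandHandler
    public void handle(OrderFlourCommand cmd) {
        // validation lives here, outside the aggregate ...
        supplierRegistry.verify(cmd.getSupplierId());
        // ... and the aggregate is loaded and invoked by hand
        repository.load(cmd.getBakeryId().toString())
                  .execute(bakery -> bakery.orderFlour(cmd));
    }
}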

This approach resulted in pretty convoluted code. Command validation was done in the application layer as well as in the aggregate, and there were no guidelines as to what validation ought to be done where.
It also forced us to leak aggregate state: the CommandHandler typically needs access to aggregate state for validation, and with validation happening outside the aggregate, that state had to get out of the aggregate too.

No doubt about it: this was not a good idea!

The better way is to have Axon load the aggregate instance for you, and to have anything needed for validation injected as a parameter of the command handler method:

@Aggregate
class Bakery {

    private Stock stock;

    @CommandHandler
    public void handle(OrderFlourCommand cmd,
                       @Autowired BakerySupplier supplier) {
        if (stock.belowThreshold()) {
            supplier.orderFlour();
            apply(new FlourOrderedEvent());
        }
    }
}

Note that the @Autowired annotation is not required. We added it just for readability, to remind us that the parameter is resolved to a Spring bean.
I remember having struggled to properly configure the Axon infrastructure (SpringBeanParameterResolverFactory) needed for this. Such parameter injection is also supported on @CommandHandler annotated aggregate constructors. The AggregateTestFixture supports this type of parameter injection as well: just register the BakerySupplier as an injectable resource in the test fixture.
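A minimal test sketch, assuming JUnit 5 and Mockito, a hypothetical BakeryCreatedEvent that leaves the stock below its threshold, and an OrderFlourCommand that carries the @TargetAggregateIdentifier (the no-argument FlourOrderedEvent matches the aggregate sample above):

import static org.mockito.Mockito.mock;

import java.util.UUID;

import org.axonframework.test.aggregate.AggregateTestFixture;
import org.axonframework.test.aggregate.FixtureConfiguration;
import org.junit.jupiter.api.Test;

class BakeryTest {

    @Test
    void lowStockTriggersFlourOrder() {
        FixtureConfiguration<Bakery> fixture = new AggregateTestFixture<>(Bakery.class);

        // Make the BakerySupplier resolvable as a @CommandHandler parameter
        fixture.registerInjectableResource(mock(BakerySupplier.class));

        UUID bakeryId = UUID.randomUUID();
        fixture.given(new BakeryCreatedEvent(bakeryId)) // hypothetical creation event
               .when(new OrderFlourCommand(bakeryId))
               .expectEvents(new FlourOrderedEvent());
    }
}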

No. 2: Guard your (stored) events

I’m sure every CQRS article will tell you this: events are the primary storage mechanism. For as long as your application is in use, you must be able to read every historic event from the event store. Changing the name or structure of an event class should never break compatibility with previously stored events. Be vigilant!

With that in mind it’s valuable to carefully consider which event serialisation technique to use. Axon supports both XML (XStreamSerializer) and JSON (JacksonSerializer).
Whichever you choose, take some time to write thorough test cases to guarantee your events remain backwards compatible. You wouldn’t want to break (de)serialisation by moving the events to another package or renaming an event’s private field. You might consider including the metadata in these tests too.
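A minimal sketch of such a guard test, assuming JUnit 5, plain Jackson, and the Jackson-annotated FlourOrderedEvent shown further down (the frozen payload literal and the getQuantity() getter are assumptions):

import static org.junit.jupiter.api.Assertions.assertEquals;

import com.fasterxml.jackson.databind.ObjectMapper;
import org.junit.jupiter.api.Test;

class FlourOrderedEventCompatibilityTest {

    // Frozen copy of a payload exactly as it was once written to the event store.
    // If a refactoring breaks this test, it would also break stored events.
    private static final String STORED_PAYLOAD = "{\"qty\":25}";

    @Test
    void storedPayloadStillDeserializes() throws Exception {
        FlourOrderedEvent event = new ObjectMapper()
                .readValue(STORED_PAYLOAD, FlourOrderedEvent.class);
        assertEquals(25, event.getQuantity());
    }
}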

XStream for example will — by default — use the fully qualified class name of the event as the root tag of the generated XML fragment.

Given this class

package com.backery;

public class FlourOrderedEvent {
    // omitted fields
}

XStream will produce

<com.backery.FlourOrderedEvent>
<!-- omitted fields -->
</com.backery.FlourOrderedEvent>

The implication is that the package where your event classes are defined leaks into your storage — eternally. Is that a problem? Well, not really, until you move the events to package com.backery.api. That’s when you will have to deal with the consequences:

  1. How to deserialize the legacy tag <com.backery.FlourOrderedEvent> to event com.backery.api.FlourOrderedEvent?
  2. And how to serialize a new FlourOrderedEvent: will you keep using the legacy XML tag (and add technical debt) or use a new tag <com.backery.api.FlourOrderedEvent> (and add technical debt all the same)?

These aspects need to be dealt with early on, preferably before going to an acceptance environment. Tune XStream until it fits your needs!
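For example, registering a stable alias keeps the FQCN out of storage altogether. A minimal sketch, assuming Axon 3’s XStreamSerializer(XStream) constructor and relying on XStream resolving several alias names to the same class while the last registered one wins for writing — verify this against your XStream version:

import com.thoughtworks.xstream.XStream;
import org.axonframework.serialization.Serializer;
import org.axonframework.serialization.xml.XStreamSerializer;

class SerializerConfig {

    Serializer eventSerializer() {
        XStream xstream = new XStream();
        // Legacy events stored under the old FQCN root tag still resolve
        // to the relocated class ...
        xstream.alias("com.backery.FlourOrderedEvent",
                      com.backery.api.FlourOrderedEvent.class);
        // ... while everything written from now on gets a package-independent tag.
        xstream.alias("flourOrdered", com.backery.api.FlourOrderedEvent.class);
        return new XStreamSerializer(xstream);
    }
}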
Already using JSON as storage format? Nice! But have you thought about decoupling your events’ field names from the storage format too? You should.
Use the annotations corresponding to your serializer to decouple the field names from the serialized XML tags or JSON keys:

public final class FlourOrderedEvent {

    @JsonProperty("id")
    private final BakeryIdentifier bakeryId;

    @JsonProperty("qty")
    private final int quantity;

    @JsonCreator
    public FlourOrderedEvent(@JsonProperty("id") BakeryIdentifier id,
                             @JsonProperty("qty") int qty) {
        this.bakeryId = id;
        this.quantity = qty;
    }

    // omitted getters
}

With this in place it’s safe to rename field BakeryIdentifier bakeryId to bakery: it will not impact the JSON (de)serialization. XStream has equivalent annotations, as sketched below.
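A minimal sketch of the XStream flavour. Note that XStream only honours these annotations once they are processed, e.g. via xstream.processAnnotations(FlourOrderedEvent.class) or xstream.autodetectAnnotations(true):

import com.thoughtworks.xstream.annotations.XStreamAlias;

@XStreamAlias("flourOrdered")
public final class FlourOrderedEvent {

    @XStreamAlias("id")
    private final BakeryIdentifier bakeryId;

    @XStreamAlias("qty")
    private final int quantity;

    public FlourOrderedEvent(BakeryIdentifier id, int qty) {
        this.bakeryId = id;
        this.quantity = qty;
    }

    // omitted getters
}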

No. 3: Keep your API clean

As you have probably read in several other DDD books or articles, your events represent whatever occurred in your business. The event (class) names should be meaningful to anyone in the business; normally they pretty well depict the business process your company operates. These events are your API. Hence, any other type used in these events effectively belongs to the API too.
I favour rich types; types that better convey what they represent. Instead of using a UUID as (aggregate) identifier type I prefer to wrap that UUID in a more meaningful type such as BakeryIdentifier. This is even more true when dealing with several aggregates in a single application.
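A minimal sketch of such a wrapper; the Jackson annotations are assumptions that make it serialize as a bare UUID string under the event’s JSON key:

import java.util.Objects;
import java.util.UUID;

import com.fasterxml.jackson.annotation.JsonCreator;
import com.fasterxml.jackson.annotation.JsonValue;

public final class BakeryIdentifier {

    private final UUID uuid;

    @JsonCreator
    public BakeryIdentifier(UUID uuid) {
        this.uuid = Objects.requireNonNull(uuid);
    }

    // Serialize as the bare UUID string rather than a nested object
    @JsonValue
    public UUID toUuid() {
        return uuid;
    }

    @Override
    public boolean equals(Object other) {
        return other instanceof BakeryIdentifier
                && uuid.equals(((BakeryIdentifier) other).uuid);
    }

    @Override
    public int hashCode() {
        return uuid.hashCode();
    }

    @Override
    public String toString() {
        return uuid.toString();
    }
}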

The same can be applied to values in your events. Likely some String, Integer or other primitively typed field in the events deserves a proper type. In real life we don’t reason in Strings or Integers, so why accept them in your application’s API?

These types belong to the API just like the events themselves. As such, make sure to keep them lightweight: you shouldn’t need a ton of dependencies to compile them. For example, don’t add JPA annotations to these types for the sake of reusing them in your JPA-based read models. It pollutes your API, and such rich types aren’t really helpful for writing nice(r) JPA queries. Especially not with Spring Data JPA.

public final class FlourOrderedEvent {

    @JsonProperty("id")
    private final BakeryIdentifier bakeryId;

    @JsonProperty("qty")
    private final int quantity;

    @JsonProperty("unit")
    private final Unit unit;

    @JsonCreator
    public FlourOrderedEvent(@JsonProperty("id") BakeryIdentifier id,
                             @JsonProperty("qty") int qty,
                             @JsonProperty("unit") Unit u) {
        this.bakeryId = id;
        this.quantity = qty;
        this.unit = u;
    }

    // omitted getters
}

In the above sample, the classes BakeryIdentifier and Unit belong to the API. The only dependencies are the JRE and the Jackson annotations library (Maven coordinates com.fasterxml.jackson.core:jackson-annotations:${version}).

That’s all for now folks, hope you enjoyed the read. In the next episode I’ll discuss another command validation anti-pattern.

Regards,

Benoît
