Webinar: Designing domain models with the business in mind


Learn how to design a model that is fit for purpose, with our senior consultant Stijn Vannieuwenhuyse

Stijn Vannieuwenhuyse talks through how to design domain models with the business in mind.

To paraphrase Eric Evans, a domain model is:

  • a system of abstractions

  • that represents a selected aspect of a domain

  • and that is used for solving problems in that domain

To ensure the model brings value, all key business stakeholders need to understand and own the model. If it’s hidden behind the engineering door, teams lose some of its usefulness.

The challenge for teams, therefore, is to rectify this and put the model at the centre of how the business and IT work together. Teams that achieve this benefit from fewer misunderstandings reaching the code, less costly rework, and faster delivery.

What you will learn from this video:

  • Why a model shouldn’t mimic reality

  • What belongs in a domain model (with real-world examples of what works)

  • How to incorporate behaviour into your model (not just data structures)

  • How to ensure that your model is understood by (and jointly owned by) the business

  • When to split a model (and why bounded contexts matter)

Who should watch?

This session is designed for people trying to improve the collaboration between business and development (e.g. architects, developers, technical leads, managers).

Session lead:

Stijn Vannieuwenhuyse is a senior consultant at Aardling. He helps clients across a range of industries, including healthcare, financial services, and automotive. He was previously Head of Engineering at Teamleader.

Edited transcript from the presentation

We all have mental models — and they're often different

Let me start with a small anecdote about my son. He's almost 10, and recently he started getting interested in following the news. A couple of weeks ago he was looking for a news programme in the TV app and couldn't find it. He asked where today's news was, and I told him it hadn't aired yet. I saw his brain short-circuit. In his world, watching TV means picking from a library — a catalogue you can always dip into. For me, growing up, television was linear. A timeline. We were talking about the same device, maybe even the same activity, using the same words — but we had completely different mental models.

That's not just because he's a child. It's how humans make sense of things. We build models in our heads all the time to reason about the world. And that's what I want to talk about today.

What is a model, really?

Take a metro map. Everyone can navigate a network using one, yet it's not an accurate representation of reality. The distances aren't right. The angles and curves of the lines don't match where they actually run. And yet it works — because it gives us a mental model. Lines, directions, colours, numbers, stops, transfers. We think in terms of how many stops and when to change lines.

The mental model we have is far more important than the representation in front of us. The drawing only expresses ideas. Without those underlying ideas, the drawing means nothing.

You can also see that the same mental model can be expressed in different ways for different purposes. There's a version of the map that shows just one line — the one you're on. There's a version that shows rerouting for maintenance. You can even explain how to navigate the network without any visual at all, just by talking about it.

So models are abstractions — mental constructions in our heads. They're not reflections of reality, and they're not tangible. Sometimes we express a model as something tangible, like a map, but the model itself remains conceptual. And a model is not a concrete solution. If you're hungry, you want food — but food is not the model.

A model only captures selected parts of reality. The parts useful for solving a specific problem. And if the problem changes enough, you may need a different model. Would maintenance workers use the same metro map as passengers? Would shift planners for train drivers use the same one?

What happens when people don't share the same model

If you're looking at a box to be shipped, one person is thinking about its dimensions and weight and which vehicle can carry it. Another is thinking about the contents — is it fragile, what's it worth. Two people, two different mental models, both entirely reasonable.

Scale that up to a group of people, and it becomes a serious problem. It's hard to understand each other, hard to get on the same wavelength. For me, the act of modelling is about expressing how we interpret our version of reality and getting aligned on a shared model — one that everyone contributes to, and that eventually shapes everyone's thinking in the same direction.

When alignment is compartmentalised — one group with one model, another group with a different one — you start experiencing friction. Someone thinks a change is small; the other group considers it a big impact. You end up in endless requirements meetings. IT delivers the wrong thing. Shadow systems appear in Excel. Architecture decays. It's not because people are stubborn. It's because they're working from different models.

The shared model: what makes it good

The solution is a single shared model. One of the most important messages I want to convey is that the model the development team expresses in their code needs to be shared with the business as well. Models are often created by the development team — but we need shared ownership. The business needs to validate the model, understand it, and recognise it as logical. There should be no layer of translation. That is one of the most important qualities of a good model.

So what else makes a model good? Let me walk through a few qualities.

A good model makes the problem manageable. Reality is a mess. The way we measure time is a good example. Years and days are based on Earth's rotation and orbit — but there's orbital drift. Our time model ignores that. We use an average day length and round the year down to 365 days. That's not how reality works. But for 1,460 days, we don't need to think about it — and on the 1,461st day, we patch it with a leap day. The goal is not a complete or precise model. It's a useful one. We reduce cognitive load by only keeping the details that actually help us reason.

A good model lets you do things you couldn't do before. Sometimes reality doesn't give us enough detail. If all we had were light and darkness and seasons, we couldn't say "meet me at 12:30." Hours and minutes weren't in reality — we introduced them as abstractions to solve a problem. But those abstractions need to make sense to the business too. You can't impose a technical solution and expect the business to understand and validate a model built on it.

A good model compresses meaning. Consider trying to describe a deadline as "before the third emergence of daylight following the first full moon after the blooming of the early spring white petal trees." Without proper names for concepts — days, weeks, months — describing any problem becomes impossibly verbose. A good model provides the vocabulary needed to describe problems and solutions concisely. There may be a learning curve the first time someone encounters the abstractions, but every conversation after that goes faster. We have to find the nouns that replace whole sentences.

A good model sharpens your thinking. It makes you ask questions about your assumptions. It creates predictability. It becomes self-explanatory. When the abstraction is right, you can reason about it, discuss it, and make decisions at a higher level. A good example: in payroll software for shift workers, if a shift crosses midnight, is that one workday or two? Does overtime apply if you work late? Our standard model of a "day" doesn't help here. But if we redefine the day as a "working day" within this model, those questions become answerable. New questions surface, and assumptions we didn't know we were making become visible. We start thinking at a different level.
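To make the payroll example a bit more tangible, here is a minimal Python sketch under one assumed rule: a shift belongs to the working day on which it starts, even when it crosses midnight. The names Shift and working_day are hypothetical, not from an actual payroll system:

```python
from dataclasses import dataclass
from datetime import date, datetime

@dataclass(frozen=True)
class Shift:
    start: datetime
    end: datetime

def working_day(shift: Shift) -> date:
    # Assumed rule for this model: a shift belongs to the working day
    # on which it starts, even when it crosses midnight.
    return shift.start.date()

# A night shift crossing midnight counts as one working day, not two.
night = Shift(datetime(2024, 3, 1, 22, 0), datetime(2024, 3, 2, 6, 0))
assert working_day(night) == date(2024, 3, 1)
```

Once the rule is explicit like this, follow-up questions — does overtime apply, when does a working day "close" — can be asked against the model rather than against the calendar.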

What a model actually looks like — its dimensions

Many people mistake a representation of a model for the model itself. They get stuck thinking in data structures and diagrams. But the underlying mental model is the model. And that mental model has multiple dimensions.

There's structure: concepts, relationships, properties. There's behaviour: what does the model do? What business rules and logic does it need to include? There are the scenarios it needs to support — and the ones it explicitly doesn't. And there's its interface: how the outside world interacts with it, not in terms of user interface, but conceptually.

Take a shipping domain, where a routing service looks up the stops a shipment needs to make based on its origin, destination, and arrival time. We can extract nouns from that description: "itinerary" — a word already used in the business — and "route specification," an abstraction we introduce to make reasoning easier. But the verbs matter just as much. The routing service finds an itinerary for a shipment that satisfies its route specification. Behaviour is as much part of the model as structure. In my experience it's often more important, and often overlooked. The verbs also shape the external interface — how users or other models interact with this one.
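The nouns and the verb from that description can be sketched directly in code. This is an illustrative Python sketch, not a real routing implementation: the satisfaction check is deliberately simplified to origin and destination only, and legs are plain location pairs:

```python
from dataclasses import dataclass
from typing import Optional, Sequence

@dataclass(frozen=True)
class Itinerary:
    legs: tuple  # (load_location, unload_location) pairs

@dataclass(frozen=True)
class RouteSpecification:
    origin: str
    destination: str
    arrival_deadline: str  # kept as an opaque string in this sketch

    def is_satisfied_by(self, itinerary: Itinerary) -> bool:
        # Simplified: only origin and destination are checked here.
        legs = itinerary.legs
        return bool(legs) and legs[0][0] == self.origin and legs[-1][1] == self.destination

class RoutingService:
    def __init__(self, candidate_itineraries: Sequence[Itinerary]):
        self._candidates = list(candidate_itineraries)

    def find_itinerary(self, spec: RouteSpecification) -> Optional[Itinerary]:
        # The verb from the model: find an itinerary that satisfies the spec.
        for itinerary in self._candidates:
            if spec.is_satisfied_by(itinerary):
                return itinerary
        return None
```

Notice that the sentence "the routing service finds an itinerary that satisfies its route specification" reads off the code almost word for word — that is the point of expressing the shared model in code.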

We need to find the nouns that replace whole sentences, but we also need the verbs to connect those nouns. Once we have a good model, we can highlight different aspects of it with different representations — whatever helps convey the message for a given purpose. But one representation matters more than the others: the code. That's where the model is eventually executed. We should express the shared conceptual model in code as closely as possible. If we don't, we reintroduce the same friction as before, just at a different level.

The data model, by the way, is just another representation of the same model. It should support the thinking and the execution of the code — not drive it.

Real examples from practice

Let me make this concrete with a few examples from my work.

I'm working with a financial institution that gives B2B loans. To approve a loan, they go through several steps, each a checkpoint. If any checkpoint fails, the loan cannot be issued. A breakthrough came when we stopped treating a check as a binary thing — pass or fail — and modelled it as having three states: rejected, approved, and a risk requiring human investigation.

That third state changed everything. We realised that rejection shouldn't necessarily be a final state — it could be reconsidered if new information came to light. The yellow, investigative state was urgent and owned by the risk team. But once there was a rejection, the initiative shifted — it was no longer the risk team's job to chase it, but the account manager's or the customer's. Two different goals, two different forces, made explicit by introducing that third state. The model sharpened the thinking and exposed organisational dynamics that weren't visible before.
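The three-state check can be sketched as an enum plus a small helper. The helper name and the owner labels are hypothetical — they only illustrate how the third state makes the organisational dynamics explicit:

```python
from enum import Enum

class CheckState(Enum):
    APPROVED = "approved"
    REJECTED = "rejected"
    NEEDS_INVESTIGATION = "needs_investigation"  # the third, "yellow" state

def next_action_owner(state: CheckState) -> str:
    # Hypothetical helper: investigation is urgent and owned by the risk
    # team; after a rejection the initiative shifts away from them.
    if state is CheckState.NEEDS_INVESTIGATION:
        return "risk team"
    if state is CheckState.REJECTED:
        return "account manager or customer"
    return "nobody"
```

A binary pass/fail check cannot express this hand-over of initiative at all; the third state is what makes the question "whose move is it?" answerable.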

Another domain we use in training is a bike-sharing system. Someone adds a bike to a station, plans a ride, goes to the station, unlocks the bike, rides, and parks it at a different station. The key business rule: if you plan a ride, the bike must be available when you arrive. Visualising capacity at the station level helped us reason about this. A station starts empty, gets bikes added, someone reserves one, the bike becomes unavailable, the rider takes it and parks it elsewhere — and only when the ride is ended does it become available again. This also let us reason about edge cases: a maintenance worker reserving a bike so no one else can use it.

We explored two competing models. In one, you reserve a specific bike. In another, you reserve a slot at a station and the station decides which bike to hand out based on order of arrival, charge level, or damage reports. Visualising both let us reason about the trade-offs.
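The two competing models can be put side by side in code, which makes the trade-off concrete. Both classes below are sketches with assumed behaviour (e.g. StationB simply hands out the first bike in an already-ranked list):

```python
# Model A: the rider reserves a specific bike.
class StationA:
    def __init__(self, bike_ids):
        self.reserved = {bike_id: False for bike_id in bike_ids}

    def reserve(self, bike_id: str) -> None:
        if self.reserved[bike_id]:
            raise ValueError(f"{bike_id} is already reserved")
        self.reserved[bike_id] = True

# Model B: the rider reserves a slot; the station decides which bike
# to hand out on arrival (e.g. by charge level or damage reports).
class StationB:
    def __init__(self, bike_ids):
        self._available = list(bike_ids)  # assumed already ranked
        self._slots_reserved = 0

    def reserve_slot(self) -> None:
        if self._slots_reserved >= len(self._available):
            raise ValueError("no bikes available")
        self._slots_reserved += 1

    def hand_out(self) -> str:
        self._slots_reserved -= 1
        return self._available.pop(0)
```

In model A a damaged bike blocks a specific reservation; in model B the station can quietly substitute another bike. The edge case of a maintenance worker fits model A naturally and model B awkwardly.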

In a third example — again the financial institution — they offer credit lines to businesses and wanted to introduce surety bonds: a third company vouches for a primary borrower and agrees to cover repayment if needed. I started visualising the surety as a relationship — the relationship itself becomes the model. From there, the relevant dimensions emerged: it needs risk approval, both companies need to sign off, it needs to be time-limited and budget-limited, and we need to check that the two companies aren't connected to the same person trying to commit fraud. By visualising these things, the right conversations happened. We also derived the public interface: you can draft a surety, propose it to a counterparty, accept or reject it, have it expire.
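The derived public interface — draft, propose, accept or reject, expire — is essentially a small state machine, and can be sketched as one. This is a minimal illustration; the class and state names are assumptions, not the institution's actual design, and the risk approval, signatures, and fraud checks are left out:

```python
from enum import Enum

class SuretyState(Enum):
    DRAFT = "draft"
    PROPOSED = "proposed"
    ACCEPTED = "accepted"
    REJECTED = "rejected"
    EXPIRED = "expired"

class Surety:
    """The surety modelled as a relationship between two companies."""

    def __init__(self, surety_company: str, borrower: str, budget_limit: float):
        self.parties = (surety_company, borrower)
        self.budget_limit = budget_limit
        self.state = SuretyState.DRAFT

    def propose(self) -> None:
        self._transition(SuretyState.DRAFT, SuretyState.PROPOSED)

    def accept(self) -> None:
        self._transition(SuretyState.PROPOSED, SuretyState.ACCEPTED)

    def reject(self) -> None:
        self._transition(SuretyState.PROPOSED, SuretyState.REJECTED)

    def expire(self) -> None:
        self._transition(SuretyState.ACCEPTED, SuretyState.EXPIRED)

    def _transition(self, expected: SuretyState, target: SuretyState) -> None:
        if self.state is not expected:
            raise ValueError(f"cannot go from {self.state.name} to {target.name}")
        self.state = target
```

Even this skeleton surfaces questions for the business: can a rejected surety be re-proposed? Does expiry apply to a proposal that was never accepted?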

When models get too big: splitting is the answer

In realistic situations, problem areas are large. And almost every domain has concepts that seem to attract everything. In a shipment domain, everything seems related to the shipment. These models become magnets — any new requirement gets added to them, and the model becomes unmanageable.

The solution is not more documentation. The solution is to split. You can split by lifecycle: a draft itinerary has different rules and different information needs than an ongoing one. You can split by business purpose: planning, auditing, picking, and tracking all involve a "shipment," but the details each needs are completely different. Smaller, more focused models that coexist and interact.

Netflix is a good example. Everything might seem related to a "show," but in practice the show concept looks very different depending on what you're doing. The catalogue needs a description and cover image. Recommendations need a different slice of information. Playback needs video stream, audio stream, your progress in the episode. Favourites are different again. All of these are related to the same underlying thing, but they are genuinely different models.
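In code, this split shows up as separate, focused types per area of responsibility rather than one giant "show" class. The field names below are illustrative assumptions, not Netflix's actual data model:

```python
from dataclasses import dataclass

# One underlying show, a different focused model per area of responsibility.

@dataclass(frozen=True)
class CatalogueShow:
    show_id: str
    title: str
    description: str
    cover_image_url: str

@dataclass(frozen=True)
class PlaybackEpisode:
    show_id: str
    episode_id: str
    video_stream_url: str
    audio_stream_url: str
    progress_seconds: int
```

The shared identifier is the only thing linking the two; neither model drags along the other's details, so each stays manageable on its own.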

When you have multiple teams working on different areas, you want a shared model for each area of responsibility — multiple models that coexist, interact where needed, and are each manageable on their own.

Key takeaways

A model is a system of abstractions, not a diagram. It is a mental concept first. Everything else — maps, code, documentation — is just an expression of it.

Good models make problems manageable, let you do things you couldn't do before, compress meaning, and sharpen your thinking.

Modelling is about aligning mental models. It is not about producing specific diagrams.

And if you take only one thing away: a model is a thinking tool. Use it as that.