

This entry, AnemicDomainModel, has caused some amount of fuss. I think Martin's saying that there isn't much point in having an object domain model without interlacing that model with behaviour - I agree with this as long as the behaviour is relevant to the domain. For non-relevant or system-level behaviour we have patterns such as Visitor, DTO, FrontController, and of course ServiceLayer.

AnemicDomainModel has been interpreted as a slight against SOA and Webservices style integrations, but I don't think the criticism applies, or is even meant to (modulo EIP I haven't heard much from the Thoughtworks crew on SOA or Webservices, but am looking forward to it). Objects and services ideally work at very different architectural scales. We can characterize objects as suitable for intra-domain work and services as suitable for inter-domain work. With the commercial state of the art today, nobody should still be sending object references or doing things that require reliable connections across unreliable, high-latency networks. Contrariwise, asking POJOs or .NET Assemblies running close by to gateway through HTTP doesn't make much sense either.

Martin has also said in the past that we should avoid object distribution for its own (or the vendors') sake, something I agree with. I think the point where you need to think about distribution is also an inflection point for thinking about an alternative model - LAN-wide distributions can look at message queuing rather than distributing objects, and Internet-scale integrations can look at application protocols like HTTP. Here's a guide from 10,000ft, based on your network topology:

  • Standalone, cluster - scripts, pipelines, object models
  • LAN, Intranet - object models, messaging, application protocols
  • WAN, Internet - application protocols, service models
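To make the LAN-scale alternative concrete, here's a minimal Java sketch contrasting the two styles. The names (OrderService, OrderChannel) are hypothetical, and an in-memory queue stands in for a real broker such as a JMS provider:

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Distributed-object style: the caller blocks on a remote method and
// couples itself to the callee being up and reachable at call time.
interface OrderService {
    String place(String order); // imagine an RMI or remote-proxy stub here
}

// Messaging style: the caller enqueues and moves on; the queue absorbs
// latency and outages, which is why it suits LAN-wide distribution
// better than shipping object references around.
class OrderChannel {
    private final BlockingQueue<String> queue = new LinkedBlockingQueue<>();

    void send(String order) {
        queue.add(order); // fire and forget
    }

    String receive() {
        return queue.poll(); // consumer drains at its own pace; null if empty
    }
}
```

The point isn't the queue itself, it's the shift in failure model: the sender no longer needs a reliable connection to the receiver at the moment of the call.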

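Going back to the AnemicDomainModel point itself, here's a minimal Java sketch of the distinction (Account and TransferService are hypothetical names). The anemic version would hollow Account out into getters and setters and put every rule in the service; here the domain-relevant rules stay on the object, and the ServiceLayer keeps only system-level concerns:

```java
import java.math.BigDecimal;

// Rich domain object: domain-relevant behaviour lives on the model.
class Account {
    private BigDecimal balance = BigDecimal.ZERO;

    BigDecimal balance() { return balance; }

    void deposit(BigDecimal amount) {
        if (amount.signum() <= 0)
            throw new IllegalArgumentException("deposit must be positive");
        balance = balance.add(amount);
    }

    void withdraw(BigDecimal amount) {
        if (amount.compareTo(balance) > 0)
            throw new IllegalStateException("insufficient funds");
        balance = balance.subtract(amount);
    }
}

// ServiceLayer: system-level concerns only (transactions, auditing);
// it delegates domain rules to the objects rather than hollowing them out.
class TransferService {
    void transfer(Account from, Account to, BigDecimal amount) {
        // begin transaction, write audit record, etc. would go here
        from.withdraw(amount);
        to.deposit(amount);
        // commit transaction
    }
}
```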
I deliberately didn't number that list because I don't want to imply an order or any level of importance among the models, and I want to make it clear that they are a spectrum which bleeds into each other. Breakdown by network topology is pretty arbitrary - others may see administrative and ownership topology as more critical, and still others may prefer a breakdown based on how we manage application state. The most confusing area is the LAN/intranet space, where in theory anything from transaction scripts to objects to messaging to services could be applied. It's also the scale where versioning issues become apparent, along with the distinction between published and public interfaces - if you are hitting these problems, you may be hitting the limitations of your model. To compound things, it happens to be the scale where many, perhaps most, of us are working (at Propylon we tend to work with customers at the Internet/WAN end of the scale).
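On published versus public interfaces: a public interface you can refactor along with its callers, but a published one has consumers you can't reach, so evolution has to be additive. A hypothetical sketch (QuoteService and its methods are made up), using a Java default method to keep old implementers working:

```java
// Published interface: consumers outside your control depend on it,
// so new operations are added alongside the old ones, not in place of them.
interface QuoteService {
    /** Original published operation; retained for existing consumers. */
    @Deprecated
    double quote(String symbol);

    /** v2 adds a currency; the default keeps v1 implementers compiling. */
    default double quote(String symbol, String currency) {
        if (!"USD".equals(currency))
            throw new UnsupportedOperationException("v1 backend is USD-only");
        return quote(symbol);
    }
}
```

If you're reaching for tricks like this routinely, that's the versioning pressure mentioned above, and a hint you may be at the wrong scale for an object model.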

Unless you work at the edges, it takes skill, judgement, luck, and even letting go of some prejudices and past learning to determine the appropriate scale to work at - feel free to disagree, but I think there are no ScaleFreeModels.

January 10, 2004 06:15 PM

