There is no top
March 20, 2005
Stefan nails it on top-down versus bottom-up approaches to SOA:
"Easy enabling of your existing systems to allow them to play a role in your SOA may be a risk. But ignoring your systems sounds like the single worst strategic mistake you can make."
He's referring to John Crupi's observation that bottom-up SOA design is a recipe for failure:
"It means that for SOA to be successful, it must be a 'top-down' approach. And top-down, means problem to architecture to solution. It does not mean, working from what we have and just wrapping it with new technologies just because we can. This bottom-up approach is quite natural and easy and is the perfect recipe for a SOA failure." - John Crupi
If John Crupi is in part saying that taking an existing system (like a J2EE or .NET install) and waving some tool magic over it to expose your domain objects via WS technologies is a mistake, I would agree with that (much more work is needed on how you expose an existing domain as services). I would also agree that the business needs to dictate its information and communication needs to IT - for too many years the tail has wagged the dog in that regard. However, the statement he's made is somewhat broader - top down or fail. I was pretty surprised when I read it.
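The difference between wrapping and designing is easy to sketch. Here's a minimal illustration (all the names are hypothetical, not from any real system) of what "tool magic" wrapping tends to produce versus a coarse-grained service contract:

```python
# A sketch (hypothetical names) contrasting naive wrapping of a domain
# object with a deliberately designed, document-style service contract.

class Order:
    """Existing fine-grained domain object from the legacy system."""
    def __init__(self, order_id, lines, status):
        self.order_id = order_id
        self.lines = lines          # list of (sku, quantity) pairs
        self.status = status

class NaiveOrderEndpoint:
    """'Tool magic' wrapping: each getter becomes a remote operation,
    so answering one business question costs several chatty calls."""
    def __init__(self, order):
        self._order = order
    def get_status(self):
        return self._order.status
    def get_line_count(self):
        return len(self._order.lines)

class OrderService:
    """Service-oriented alternative: one coarse-grained operation that
    returns a self-describing document the caller can process on its own."""
    def __init__(self, orders):
        self._orders = orders       # order_id -> Order
    def get_order_document(self, order_id):
        order = self._orders[order_id]
        return {
            "order_id": order.order_id,
            "status": order.status,
            "lines": [{"sku": s, "quantity": q} for s, q in order.lines],
        }
```

The naive endpoint faithfully mirrors the domain object; the service exchanges a document. It's the second kind of interface that survives contact with callers you don't control.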
My tendency would be to believe the probability of failure is higher with a big, top-down approach that has the ambition of spanning an enterprise. There are a few reference models and architectures for SOA available now, and some of those are somewhat onerous, somewhat big. They involve the entire enterprise falling into line. I think it will be hard for any CIO or inward-facing CTO to get on board with that approach, after the years of hit-and-miss ERP and EAI project failures and the general IT overspend at the end of the 20th Century.
The difficulty with a solely top-down approach is that there is no top. SOA systems in reality tend to be decentralised - there's no one point of architectural leverage or governance, no one person who's going to be able to say and then enforce "a decision in ten minutes or the next one is free". Top-down approaches are necessarily centralised, and need to make assumptions about being able to coordinate activity amongst stakeholders, which dents their usefulness in real environments. In the field, you'll be communicating and working with partners over whom you have no real leverage in terms of specifying compatible technology or even processes. In some cases you won't just have no control over your callers, you'll have no idea who your callers actually are. Typically, partners will do the minimum amount of work necessary to communicate electronically (there are occasional exceptions to this, such as buy-side consortia). We have a good bit of experience of dealing with decentralised activities in Propylon, and what strikes me as an approach that works in that context involves three things - focusing on finding the minimal necessary agreements, transforming data, and connector semantics. I see other efforts in the industry to be along the same lines but with a different emphasis - focus on maximising the scope of agreements, concentrate on shared data semantics, ignore connector semantics. Whether these are correlations or causes, I couldn't say, but they are worth keeping in mind.
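The "minimal necessary agreements plus data transformation" idea can be sketched concretely. In this hypothetical example (the partner name and field names are mine), the only thing agreed with each partner is the shape of their own message; a per-partner transform maps it into a canonical internal form at the connector boundary:

```python
# A sketch (hypothetical partner and field names) of transforming partner
# data into an internal canonical form at the boundary. The minimal
# agreement is each partner's own message shape; nothing else is shared.

def transform_acme(message):
    """Map the hypothetical 'acme' partner format to canonical form."""
    return {
        "partner": "acme",
        "order_ref": message["OrderNo"],
        "items": [{"sku": i["Code"], "qty": int(i["Qty"])}
                  for i in message["Items"]],
    }

# Registry of per-partner transforms; adding a partner means adding
# one transform, not renegotiating a shared enterprise data model.
TRANSFORMS = {"acme": transform_acme}

def receive(partner, message):
    """Connector entry point: select the partner's transform and apply it."""
    return TRANSFORMS[partner](message)
```

The point of the sketch is that agreement is kept small and local: each new caller costs one transform at the edge, rather than enterprise-wide alignment on shared semantics.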
The goal for any enterprise should be to wean itself off building big centralised systems and focus on how to network smaller, more adaptable ones together. It would help greatly if classically trained architects stopped killing good ideas for small and nimble systems by looking at them and saying "that'll never X", where X is some ility that may or may not matter in an operational context. And even if it will matter, it might not matter yet, at least not so as to justify shouting down the approach. The reason we needed to worry in the past that 'it will never X' is that there is an assumption built into the procurement, requirements and development phases that systems of this kind get finished and that's it - no more money for active development. You have one shot to get it right. It turns out that 'one shot' is a myth, and a dangerous one at that - business systems are never really finished, but thinking that they are thereafter radically affects how budget is dispensed and for what.
An important take-away from SOA thinking is related to the software lifecycle - that we're building Services, not Products. Services are ongoing entities that require continuous attention. People don't always appreciate why the likes of Google run extended betas, but if you spoke to an engineer there, or in Amazon, and asked them when they would be finished, that might seem a strange question. 'Finished' is an odd way of thinking about a service - they're either online or they're not. But in the enterprise, developers building out services are asked every day when the service will be 'finished' - even though it's an equally odd idea there.
SOA is the first time the industry has taken the reality of ongoing systems maintenance and requirements variability and folded them into systems architecture. This is a critical step forward, because the bulk of system cost is not found in the initial development. The other operational outcome of SOA (and WS) is that you don't really integrate until you go live - you can work in a staging environment that emulates the live one as much as you like, but it's just not the same thing, because you won't have the parties actually calling your services to hand in staging. Staging is limited, and building a parallel staging universe isn't tenable. The thing to do is to plan to fold development and testing into the live system (along with the staging phase), and be ready to work directly on live servers. I will concede that working on live servers is an idea that could make people deeply uncomfortable. But it's not impossible to do, and it is one reason why organisations that have embraced the Service or Web2.0 model, like Technorati and Google, run extended beta programs. The penny dropped for me when I read Steve Loughran's paper, where he discusses extending the RUP to cater for deployment and operations use-cases - Steve's paper is also the first time I saw the notions of integrating on the live servers and letting developers access those servers in print.
In short: services don't get finished, they become available, and then need to stay available.
The next innovation needed in IT is not new technology stacks, or new architectures, or even making techies and suits communicate. It's to find business models which support the ongoing incremental development and deployment approaches implied by Service Oriented and Web2.0 thinking. To build better information systems, we need better financial models. The current fixed-price and time-and-materials poles apportion risk poorly. As a result they force parties into contract-heavy engagements, where sign-off to build can't occur before all the requirements are known in advance. They result in high-ceremony processes that can trace all the requirements no matter how trivial and provide accountability no matter how incidental. Wanting to know the requirements is of course important, but insisting on knowing all of them in advance is unrealistic, and no longer a workable approach. There needs to be a means to support variability in projects - as Mary Poppendieck put it, every late requirement represents a strategic opportunity for the business if it can be delivered. Granted, the suggestion that the business models need an upgrade may sound unrealistic, but there's not much point talking about software as services or process agility in the context of rigid commercial frameworks.
Stefan's closing advice is this:
"From my experience, the best way to approach this is with a mix of high-level vision, introduced top-down, and bottom-up, quick-win scenarios that sort of grow toward each other: 1. Set up some initial guidance and high-level strategy, spending no more than a week on it. 2. Solve the next few small integration or B2B connection problems using 'SOA technology'. 3. Revise your high-level strategy to reflect what you've learned. 4. Rinse and repeat."
My experience concurs with this. When the business people and the architects are thrashing out the documents and processes (which can take years, or simply be an ongoing conversation that produces a stream of requirements), you cannot have the programmers sitting on their hands - they need to build something and show it to the business, so the business can ratify and refine what it's asking for. A culture of continuous deployment and tending to front-line services will also help the organisation's infrastructure become robust to continuous requirements changes.
 Subscribed. I'm delighted this man is weblogging. The J2EE patterns book is wonderful.
 Probably true for any software design approach, not just SOA. Grady Booch made the observation about big systems being built from smaller working systems many years ago.
 The term for this in the Web Services world is 'protocol neutrality', but you also see it in architectures like JXTA or FIPA.
 Decentralisation will come to be the norm for business systems over time, which is one reason why developing an understanding of how Internet protocols and formats work is becoming a core skill, just as distribution has been a core skill in the past. Understanding Internet architectures and protocols is particularly valuable for appreciating minimal agreements and the importance of connectors.