Archive for July, 2007
I’m home sick today, watching old PVR’d stuff. I watched a Nova episode on solar power I recorded months ago, which prompted me to get on the ‘net and start investigating solar credits programs in Florida, which prompted me to look at my dashboard widget for stocks and see that AAPL is getting hammered today.
No big surprises here; Apple is announcing numbers this week, and AAPL is about as volatile as a stock can get based upon “news” (it usually dips when all the panty-waists run for cover prior to a quarterly earnings report). The financial analysts also continue to have their heads up their asses, a condition I endured for years as an Apple investor during the iPod frenzy, when nobody seemed to understand the obvious long-term implications of the halo effect, and consistently doubling or tripling iPod sales wasn’t good enough for the guys at the big brokerages.
So, what misinformed ruminations are the analysts interpreting off of the inside of their anal passages this week? None other than “disappointing” iPhone activation numbers for AT&T during the first weekend of availability indicating lower than expected unit sales, and perceived slowing interest in the iPhone at Apple Stores.
Well, I think this is just garbage. I’m going to go on record stating that I think the low activation numbers stem mainly from the number of customers who had trouble activating during the first weekend. And as far as interest being low in Apple Stores, I think most people would rather go to an AT&T store (of which there are tons) than an Apple Store (of which there are a handful) to buy a cell phone. I’ll find almost any reason to go to an Apple Store, even if I don’t want to buy something, and yet I’d go to an AT&T store way before an Apple Store to buy an iPhone. I just naturally expect that a cell phone store would be able to help me with the purchase better than a computer/gadget store would.
Thank God Piper Jaffray still has the sense to call it like it probably is. They seem to be the only group of analysts that understands Apple as a company and knows what the hell they are talking about.
I’m guessing that the rest of the analysts are just manipulating the market as usual so they can buy a crap load of shares before AAPL hits $200. After all, back in the iPod frenzy, I picked up most of my AAPL stock at around $30 while the detractors rambled on; and I’m still long on AAPL at $134 today.
Okay, so it doesn’t look exactly like the bike from Akira, but pretty darn close. Unfortunately, my girlfriend thinks there are more important things I should be dropping $15,000 on in the next twelve months than another vehicle that I don’t really need…
As I usually do when I read an article I like, I checked out the author’s blog, and then looked to see whether he’s written anything more substantial (such as a book).
I just found an interesting article on SOA and BI that is quite relevant to the state of our architectural plan (considering that we’re mere months away from getting some messaging and BI solutions rolling at CFI). After finding the author’s blog, I found out that he’s writing a book for Manning on SOA Patterns due in print in January 2008.
Manning’s MEAP program (i.e. early access) gives you the ability to look at the first chapter, and I must say I’m looking forward to the book if it continues in the vein of the first chapter.
“Apple makes a great operating system and programming environment based on OPENSTEP, which wouldn’t exist if NeXT computers hadn’t failed, which wouldn’t exist if Steve hadn’t been forced out of Apple, and which Apple wouldn’t have had to buy if they hadn’t lost the desktop wars due to their mismanagement and Microsoft’s awesome yet pretty much illegal business mojo.”
A few weeks ago, LeGros turned me on to Mule, which I had previously not heard about. Brian and I have both been reading the excellent book in the Martin Fowler Signature Series from Addison-Wesley, Enterprise Integration Patterns, and I imagine that Brian stumbled across Mule as a result of the patterns contained in this book.
So, what is Mule?
I’ve had a hard time describing it to people so far, since it does a lot of things. But from playing around with it, reading articles, and watching video podcasts, the best definition I can offer is that it’s a framework. The Mule framework makes it very easy to link together disparate services exposed over a variety of formats and protocols, and enables them to communicate with each other either synchronously or asynchronously. Mule also handles routing of traffic between systems in the event of an error, failure, or custom condition. The nicest thing about Mule is that it is essentially an implementation of all the best practices and patterns described in the Enterprise Integration Patterns book, which makes it extremely easy for somebody new to the principles of ESBs and messaging to get something up and running.
ESBs and Why They’re Important
One of the architectural principles I set in motion at CFI was for us to build standalone, service-oriented APIs for individual systems. In each API, we have a top-level service class that offers operations for other systems to call, and we hide as much of the implementation of the services as is reasonable. At all sensible costs, we avoid tying our code to specific implementations of vendor tools. When we do decide to use specific tools, we provide reasonable amounts of abstraction to allow those tools to be switched out at a later time with ease. Naturally, we use the Spring framework to wire together the internals of our APIs, but this is also hidden from the outside world.
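To make the idea concrete, here is a minimal sketch of what one of these standalone service APIs might look like. The names (`AddressService`, `normalize`) are hypothetical, not CFI’s actual API: callers compile only against the interface, while the implementation stays hidden and swappable behind it.

```java
// Hypothetical sketch of a standalone service API: other systems see only
// the interface; the implementation (and its Spring wiring) stays hidden.

// The only type other systems compile against.
interface AddressService {
    String normalize(String rawAddress);
}

// Package-private implementation; swappable without touching callers.
class SimpleAddressService implements AddressService {
    public String normalize(String rawAddress) {
        // Trivial stand-in for real normalization logic.
        return rawAddress.trim().toUpperCase();
    }
}
```

In practice the implementation class would be instantiated and injected by Spring, so even the choice of `SimpleAddressService` never leaks to consumers.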
This is all well and good, and to date we have had tons of success with this approach. However, I was always uncomfortable with where we would go from there. In order to properly make use of many disconnected service APIs, you have to either stack them on top of each other or couple them in some fashion.
For example, we are going to rewrite our sales processing system soon. The sales process is essentially an orchestration of a number of services: Inventory, Customer Management, Addressing, Accommodation Services, and many others.
Looking at Addressing for a second, we want to build a standalone system to manage US and international addresses (so we can solve that problem and be done with it), but I know that there will be system-specific additions to the Addressing system based upon the business rules for the businesses those systems support.
So, what are our options for handling this sort of use case?
1) Create a new API that extends the Addressing API
2) Integrate the Addressing API with another system with specific functionality for the Sales system
The first option is dangerous, since it requires us to form a tight dependency between the basic Addressing system and the extended version. Changes to the lower level system will cascade to the higher level system, and potentially vice versa.
The second option sounds cleaner in a sense, but still poses problems. How do we integrate one system with another without introducing couplings at the service level? To clarify, we would have code in the Sales-specific system calling the service layer in the general Addressing API. Thus, any changes to the Addressing API would require recompilation and/or reprogramming of the higher level system.
This simple example extends to every system involved in the service architecture. Even if no Sales-specific addressing functionality was required, the Sales system would be directly coupled to the service APIs for the Inventory, Addressing, and Accommodation service layers. Additionally, so would every other system that reused these service APIs. Once again, as changes are introduced to the APIs, they would cascade through the system causing potential pain points for every consumer using the service.
Abstraction Through Delegates as a Possible Solution
My gut-level reaction to this scenario was to put an object in between every system and the service layer(s) that they worked with as a buffer to change – something we dubbed the Delegate. The result would be that if changes occurred to the services consumed by the calling application, the Delegate could absorb them and handle translation.
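A rough sketch of the Delegate idea, with invented names: the calling system talks only to the Delegate, which adapts the general service’s API, so a signature change in the service is absorbed in one place instead of in every caller.

```java
// The general-purpose service API (hypothetical).
interface AddressingService {
    String verify(String fullAddress);
}

// The Delegate: the Sales system calls this class only, never the
// AddressingService directly. If the service's API changes, only the
// translation logic here needs to be touched.
class AddressingDelegate {
    private final AddressingService service;

    AddressingDelegate(AddressingService service) {
        this.service = service;
    }

    // Sales-facing operation; translates to whatever the service expects.
    String verifyShippingAddress(String street, String city, String zip) {
        return service.verify(street + ", " + city + " " + zip);
    }
}
```

The cost, as noted above, is one Delegate per consumer, each of which still needs rework when the underlying API changes.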
However, I still wasn’t entirely comfortable with this solution. Unfortunately, this solution still means that every Delegate for every system would need to be touched in the event of change to a service API. While the transformation process would potentially be simplified by the abstraction layer of the Delegate, it would still require a lot of rework.
Another potential issue is integration with systems written in different technologies. In order for us to work with a service written in .NET, or a legacy system with no API, we would have to write Java APIs to link to these services and expose them to the application through libraries on the classpath. This seemed a little tedious, especially when many commercial .NET applications come pre-bundled with web service APIs. The advantage of having a custom Java API wrapping a vendor’s product is that we could shape the API into a domain-specific service that more closely met the needs of our business – but it still means having to write a wrapper each time we need to integrate a new vendor’s products.
At the time, I had no better solution in mind. Ultimately, you always end up with some coupling in systems, and I felt that we had minimized it as much as possible. The key is to keep couplings light and easy to change. I felt that we had achieved some ease of change with the Delegate solution, and that perhaps this was as much abstraction and loose coupling as we could manage in our architecture. It was still a lot better than what we had in our present enterprise systems, and was a compromise I was willing to see play out over time as we built up more experience with the service development process. After all, we could always change our strategy later.
Why Not Web Services?
It wasn’t that I hadn’t heard of Enterprise Service Busses and web services yet; they just seemed like overkill for our needs.
Most of the ESB stuff I had read so far revolved around web services. I had always had a loathing for web services, since it seemed to me that in an entirely Java-centric environment such as ours, adding a web service on top of every service API would mean a whole extra set of overhead for converting objects to XML, making SOAP calls, converting XML back to objects, and finally making a call to the service. In my opinion, we might as well just have each service invoke the other in-process through stateless services available in libraries on the classpath.
My plan for avoiding web service overhead was to expose all the available service API libraries to every application by bundling the libraries into a common set of JARs, which we referred to as “CommonLib”. Any application could make a direct call to a service in an API on the classpath, in the same way that a set of distributed applications can interact over web services exposed on the network.
Unfortunately, having a bunch of common libraries on the classpath means that every application has to use the same version of popular open source libraries, such as JDOM and the Apache Commons libraries. Initially, I didn’t see any issue with this since we were in complete control of our environment for new Java projects.
However, I failed to take into account the complexities introduced by deploying our code to the Flex and ColdFusion servers, which also bundle common open source libraries. We immediately started running into issues and incompatibilities with differing versions of these common libraries, and it was apparent that this strategy was going to be more of a headache than we had expected.
EIP and ESB to the Rescue
When I started reading the Enterprise Integration Patterns book, Brian introduced me more formally to the notion of the ESB in the way that the EIP book sees it: a set of services loosely coupled and connected to a bus, via which messages (both synchronous and asynchronous) can be sent between interested applications.
With the original strategy for in-process communication, I had imagined that we would end up with a core set of common, genericized Java objects that could be used to send information from one system to another. We had not gotten far enough into exploring the object-oriented view of our domain to clarify these objects at the time that Brian and I discussed ESBs, but the time was fast approaching. However, I was still somewhat uncomfortable with the notion of these common Java objects, since they further enforced the coupling between the Delegates and the service APIs I described earlier. But with no other strategy yet in mind, I decided we would try it out and evaluate it in the first use.
This is where the notion of an ESB and the EIP book solved the problem.
The notion that really drove home the sense behind the ESB was one I was unfamiliar with: the Canonical Data Model. Essentially, the Canonical Data Model enforces the notion that messages sent between one system and another adhere to a common data format. This would mean that we would agree upon a set of XML messages that could be sent between systems. The beauty of this approach is that the XML messages can be sent across any number of transports (including robust messaging systems implementing a technology such as JMS), so we didn’t necessarily need the overhead of web services and SOAP. Since practically any technology (including our legacy Oracle applications) can consume XML, this meant that we could immediately plug standalone systems from other vendors (or produced by ourselves) into the bus, and get instant results.
So, what about translating message formats? Obviously, not all systems from varying sources will interoperate together seamlessly out of the box; they will expect different inputs and outputs. Well, it’s very easy (and relatively performant) to translate XML from one format to another using XSL templates, and the tools for doing so are incredibly simple for a programmer to leverage. So, while we maintain a little extra overhead by needing to convert data to and from XML, we lose the necessity to write a great deal of code to make it happen, which is a plus in terms of saving integration time.
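As a concrete illustration of how little code the XSLT route requires, here is a small translator using the JDK’s built-in `javax.xml.transform` support. The message and stylesheet formats are made up for the example; in a real ESB the stylesheet would encode the agreed-upon canonical mapping.

```java
import java.io.StringReader;
import java.io.StringWriter;
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.stream.StreamResult;
import javax.xml.transform.stream.StreamSource;

// Translates an XML message from one format to another by applying an
// XSL stylesheet with the JDK's standard XSLT engine.
class XslMessageTranslator {
    static String translate(String xml, String xslt) throws Exception {
        Transformer t = TransformerFactory.newInstance()
                .newTransformer(new StreamSource(new StringReader(xslt)));
        StringWriter out = new StringWriter();
        t.transform(new StreamSource(new StringReader(xml)),
                    new StreamResult(out));
        return out.toString();
    }
}
```

That is the entirety of the plumbing; everything else is a stylesheet that a programmer (or even an analyst) can write and drop into the bus’s configuration.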
Then there is the challenge of changing data formats. Over time, surely we would change the format of our messages in the Canonical Data Model, wouldn’t we? And wouldn’t this cause us to end up having to rewrite the interfaces between the services to accept the new message formats? As it turns out, there are three patterns that neatly solve this problem: Format Indicator, Normalizer and Content Enricher.
Essentially, when we put together our first stab at our Canonical Data Model, all the XML messages that we send out will be “Version 1.0.” As we add systems to the message bus, it is guaranteed that the new systems will eventually need more (or different) information than the original messages hold. By simply introducing Content Enrichers to grab the extra data, we can add the information needed and upgrade the version of that message to “2.0” for the entire bus. Alternatively, we can add a Content Enricher at the last minute for a specific service, and gather a service-specific combination of data that no other service on the bus cares about. Either way, we have a lot of integration flexibility with very little effort.
Let’s come back to the example where we introduce a 2.0 version for an existing message. We have a number of old systems producing and consuming messages in the 1.0 format, and now a new system is being introduced which requires additional (or different) information in the new 2.0 format. We write an enricher/transformer to translate the 1.0 message to the 2.0 format for the new system, and leave the rest of the systems unchanged.
In practice, this would mean that every old system attached to the bus would continue to spout messages in the 1.0 format. New systems consuming the 2.0 format would be connected to the ESB, and the ESB would simply receive messages from the old systems in the 1.0 format and would run them through the transformation process before handing them off to the new systems. This would mean that the new systems would just start receiving version 2.0 messages and would never know the difference.
Obviously, this strategy scales for any changes to the Canonical Data Model well into the future, and for any system that we may introduce, regardless of its data expectations. It’s as simple and obvious as using adapter plugs when connecting electronic equipment, or transformers that adjust voltages so that an appliance expecting 240V can run from a 120V outlet.
That Sounds Like a Lot Of Work
It’s actually not. Let’s walk through the 1.0 format to 2.0 format example. All we’d need to do is write a simple transformer/content enricher to convert the 1.0 message to the 2.0 message, and vice versa. With this step complete, the transformer component is free to be used as an adapter anywhere in the ESB, just like I can reuse the same model of electrical transformer to convert voltages for any number of appliances.
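A toy version of such an enricher, using the JDK’s DOM APIs: it takes a hypothetical 1.0 customer message, adds a field the 2.0 consumers need, and bumps the version attribute. The element names (`customer`, `country`) are invented for illustration, not an actual canonical format.

```java
import java.io.StringReader;
import java.io.StringWriter;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.transform.OutputKeys;
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.dom.DOMSource;
import javax.xml.transform.stream.StreamResult;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.xml.sax.InputSource;

// Content Enricher sketch: upgrade a "1.0" message to "2.0" by adding
// a piece of data the new consumers require.
class CustomerMessageEnricher {
    static String toVersion2(String v1Xml, String countryCode) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(new InputSource(new StringReader(v1Xml)));
        Element root = doc.getDocumentElement();
        root.setAttribute("version", "2.0");            // mark the new format
        Element country = doc.createElement("country"); // enrich with extra data
        country.setTextContent(countryCode);
        root.appendChild(country);
        StringWriter out = new StringWriter();
        Transformer t = TransformerFactory.newInstance().newTransformer();
        t.setOutputProperty(OutputKeys.OMIT_XML_DECLARATION, "yes");
        t.transform(new DOMSource(doc), new StreamResult(out));
        return out.toString();
    }
}
```

In a real deployment the enricher would typically fetch the extra data from a reference system rather than take it as a parameter, but the shape of the component is the same.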
In an environment like Mule, where message routing and transformation are configured declaratively, all we need to do is modify the input points of the systems requiring the new message format so that the transformer intercepts the 1.0-format message before it arrives at the system expecting the 2.0 format. In any future systems that we build, we can reuse this transformer over and over again; all we had to do was write the transformer class once and add a little configuration to an XML file when adding systems to the ESB.
We can even go further and add a new endpoint that exposes a legacy system’s messages in the 2.0 format. The old channel that all the system’s legacy peers are connected to would continue to carry 1.0-format messages, and the new systems could connect to the channel producing messages in the 2.0 format. You can see this example extending out over many years: as new message formats appear, new channels appear, and ultimately you have a stable legacy system continuing to do what it has always done, with ESB adapters producing messages in any number of formats for any number of loosely coupled consumers.
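The channel-per-version idea can be sketched in plain Java as a tiny content-based router (in an actual ESB like Mule this dispatch is configuration, not code; the class here is purely illustrative): consumers subscribe to the message version they understand, and each message is delivered only to the matching channel.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Consumer;

// Toy channel-per-version router: one logical channel per message format,
// with consumers attached to whichever version they understand.
class VersionRouter {
    private final Map<String, List<Consumer<String>>> channels = new HashMap<>();

    // Attach a consumer to the channel for a given message version.
    void subscribe(String version, Consumer<String> consumer) {
        channels.computeIfAbsent(version, v -> new ArrayList<>()).add(consumer);
    }

    // Deliver a message to every consumer on that version's channel.
    void route(String version, String message) {
        channels.getOrDefault(version, List.of()).forEach(c -> c.accept(message));
    }
}
```

A legacy producer keeps publishing 1.0 messages to its channel, while an adapter republishes the transformed 2.0 copies to the new channel; neither side knows the other exists.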
Another benefit we get from an ESB is total technology and version independence. If I have a ColdFusion server running on JDK 1.4, and I need it to talk to a service using JDK 1.5, I can simply bundle the service up and expose it to ColdFusion through Mule via a REST interface or a web service wrapper. If I have other services that run on JDK 1.5, Mule can expose them to each other in-process using a virtual machine transport, allowing me to leverage the benefits of in-process calls that I was trying to get with our original architectural plan. And if one day the two services communicating in-process need to start using different message formats, I can add a simple translator and a few lines of configuration to Mule, and the services will be blissfully unaware that anything has changed.
As you can see, there are a lot of benefits and flexibilities introduced by an ESB – especially when paired with Mule.
Mule Overview and Examples
I’d like to discuss my experiences with Mule in my blog over the coming weeks, posting some sample code and sharing thoughts on how you can get up to speed with Mule quickly. To save rehashing a lot of information that exists on the Mule web site, I’m going to refer you to their Architecture Guide, which is pretty well written and covers the patterns you’ll need to be familiar with in order to understand the challenges and solutions of enterprise integration.
Mule seems to be growing fast, but you won’t find any books on it on Amazon yet; it’s just too soon. While there has obviously been a lot of effort put into the User Guide, I struggled with a number of specifics and had a hard time finding information. That is what compelled me to blog a little about my experiences, since I think the framework is very cool and I’d like to see others play with it and adopt it.
Didn’t You Say Mule was “Spring for Enterprise Integration?”
Yes, I did. The reason I say this is that, like Spring, Mule emerged from frustration with boilerplate – Ross Mason’s frustration with the integration solutions provided with Java, the same way that Spring emerged from Rod Johnson’s frustration with the boilerplate in EJB and J2EE. Also like Spring, Ross created Mule to be configured simply and quickly using short XML files, cut out unnecessary work through auto-configuration and convention-based approaches, and provided interfaces for easily extending the framework so that you can bend it to your will for the particular problem you are solving.
Unfortunately, unlike Spring, Mule has not reached de facto status yet, but I imagine that it will in the next year or two. With practically everybody in the Java community using Spring, the problem of exposing standalone, application-server-independent applications to an infrastructure will be popping up all over the place. EJB took care of this for you in its own way, but standalone Spring applications need a simple and easy way to be exposed to each other, and for routings between systems to be lightweight, loosely coupled, and easy to manage. Since Ross put the same smart design choices behind Mule as Rod did with Spring, and makes the software available for free with commercial backing from MuleSource the same way that Spring is backed by Interface21, I imagine that Mule will be quickly adopted by the community and will soon achieve the same status that Spring has now.
I’d also recommend watching this excellent video of Ross Mason describing Mule and its role in ESBs.