How Much Architecture Is Too Much Architecture?

    My team has been evolving our system design and technology strategy over the last nine months. We’ve been going from CF/Fusebox 3 with CFCs to CF/Fusebox 4 with Java.

    Overall, I’m extremely happy with the decision, and my developers have echoed that sentiment. However, we’ve been having discussions recently about how much architecture we do prior to development, and how much detail we put into that architecture.

    Here’s a rundown of our development process once requirements gathering comes to an end. Our SDLC is based upon the Fusebox Lifecycle Process, or FLiP (more about this in my blog post on my FLiP presentation at CFUNITED 2005).

    1) An architect sits down and goes over the prototype and requirements document, and begins to explore their ideas for implementation.

    2) The architect decomposes the wireframe in Adalon (our Fusebox design tool) and creates a skeleton for the Controller in the Fusebox app. This is subjected to peer review by the team, during a presentation by the architect.

    3) The architect fleshes out the Fusebox skeleton with Fuses and FuseActions, while simultaneously developing the API for the Java delegate that the web site will use to interact with any APIs (system-specific or third-party) needed by the application. (We use the delegate object to hide the actual APIs being used, which allows us to switch APIs on the fly with minimal impact on the site itself.) This is also briefly reviewed by peers.

    4) The architect develops the object model. Right now, we’re doing this by stubbing out the interfaces and concrete classes, and documenting their functionality and responsibilities in JavaDocs. Once again, peer review takes place when this is complete.

    5) Finally, the architecture documentation is delivered to the developers for implementation. They code off of the docs with speed and ease.
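    To make steps 3 and 4 concrete, here’s a minimal sketch of what a stubbed-out delegate looks like. All of the names (PaymentDelegate, StubPaymentDelegate, charge) are hypothetical examples, not from our actual system; the point is the shape: the site codes against a stable interface documented in JavaDocs, never against the vendor API directly.

```java
/**
 * Hypothetical delegate interface (step 3): the site interacts only with
 * this API, so the underlying vendor can be swapped with minimal impact.
 */
interface PaymentDelegate {
    /**
     * Charges the account and returns a provider-neutral confirmation code.
     *
     * @param accountId     our internal account identifier
     * @param amountInCents charge amount, in cents
     * @return a confirmation code that does not expose the vendor's format
     */
    String charge(String accountId, long amountInCents);
}

/**
 * One concrete implementation (step 4, stubbed for illustration). Swapping
 * payment providers means replacing this class behind the interface; the
 * site itself never changes.
 */
class StubPaymentDelegate implements PaymentDelegate {
    public String charge(String accountId, long amountInCents) {
        // A real implementation would call the third-party API here.
        return "OK-" + accountId + "-" + amountInCents;
    }
}

class DelegateDemo {
    public static void main(String[] args) {
        // The site only ever sees the interface type, not the vendor class.
        PaymentDelegate payments = new StubPaymentDelegate();
        System.out.println(payments.charge("acct42", 1999)); // prints OK-acct42-1999
    }
}
```

Because the interface and its JavaDocs exist before any real implementation, they double as the architecture documentation the developers code from in step 5.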

    If this sounds like a lot of work up front, it is – and with good reason. As a manager of a development team, I have to strike a careful balance between total chaos and over-documentation/over-engineering. Imagine these two scenarios, on polar opposites of the spectrum.

    I call the first scenario “Developer Code-Fest.” In this scenario, the development team is made up of top-flight developers, all hopped up on caffeine and Slashdot, and spewing catch phrases like “agile” and “XP” when they really mean “no documentation” and “by the seat of my Jolt-stained pants.” The business requirements are timidly slid under the door of this rabid geek party by the Project Manager. All hell breaks loose. As the sound of clacking keys reaches a crescendo, the developers froth at the mouth uncontrollably, with no thoughts of forward engineering having ever crossed their minds. Upon completion of their masterpiece, they ritualistically torch the requirements document with lighter fluid in a metal trash can, and throw the application over the wall to QA. When the project manager asks them where the technical documentation is, they go back and add just enough to satisfy “the pointy-haired bosses” – after all, they can’t remember what the hell they programmed, and if they needed to make changes, they’d just go back and read the code, right?

    I call the second scenario “Architect Yourself Into Oblivion.” In this scenario, the project arrives fully documented to the thoughtful and bespectacled architect, who regards it with the same level of interest as a biologist who has just discovered a new strain of llama. After three weeks of reviewing the requirements and quietly sipping chamomile tea, the architect decides that they need an additional six weeks to investigate the cleanest design pattern for implementing the “log out” process. A further fifteen business days are lost discussing the benefits of implementing persistence using strict ANSI-92 compliant SQL, as opposed to Oracle’s slightly specialized version, which (horror!) would never be portable to DB2. Eight months later, the architect is halfway through the design process and has written seven tomes of documentation, which, while resplendent in their undeniable design simplicity and elegance, can’t actually be implemented in any of the languages presently available to mankind. The project runs out of money, everybody gets fired, and the entire island of Puerto Rico is able to power itself for six months off of the BTUs generated by burning the architecture documentation.

    Of course, I slightly exaggerated both scenarios, but the point gets made: where do you draw the line between the no documentation/design scenario and over-architecture?

    I guess I’ll state the case another way to reduce the scope of that question. Here are the things that are important to me as a manager, with the reasons why.

    Documentation Before Coding
    I’ve never seen a developer do a decent job of documenting a system after it was programmed. The docs always assumed a level of knowledge of the code that you could only have had if (a) you programmed the thing in the first place, or (b) you read all the code. As a result, I would prefer that the system be documented ahead of time, and (preferably) actually implemented primarily by people who did not write the design documentation. The theory behind this is that if the docs don’t make sense by themselves, the system could never be implemented in the first place.

    Peer Review of Design Before Coding
    I want everything to be looked at by at least two people. Nobody is as good as everybody put together. I know this from experience, because my team has never left a peer review session without a cleaner design coming out of the conversation. (This also applies to post-implementation code review in equal measure.)

    Ease of Use, Ease of Change
    If my developers finally win the lottery pool, split the winnings, and move to Tahiti, I need sufficient documentation for my new development team to understand the old systems quickly at the usage level (i.e. make this work) and the implementation level (i.e. add this feature without breaking what’s already there). I also need different developers to be able to work on each other’s code, without one person being the guru on specific pieces that they built; gurus become bottlenecks to their creations, and end up being bound to the system they built rather than evolving their skills with new systems. Finally, I need third-party teams to be able to interact with our APIs without needing to understand the complex intricacies of the API’s innards (again, the usage docs satisfy this criterion).

    To date, I haven’t found a good way to meet these goals without doing a decent amount of up-front design work. On the flip side, most of our apps designed in this fashion have been great successes, and have been relatively well-designed and easy to maintain, so the payoff has been there. As for development timelines? They aren’t always the shortest, but they are far from unreasonable. After all, the biggest overall system cost and wasted future development effort come from bad design or lack of documentation, so ultimately you save time and money by doing more work on the front end.

    So, you might wonder why the hell I’m looking to change anything if what we have works. The answer is simple: if there is a shorter way to get the same results, I’d like to be using it. I’m a huge fan of process improvement, and finding new ways to do old things better. I guess what I would like to see is more agile approaches to our development process, more parallel architecture, and the same level of forward documentation and design as we have at present – without the same volume of work and time required to produce it. To quote the cliché, time is money.

    To that end, I’m looking at a lot of different avenues, and my team is having a lot of discussions about where we can create efficiencies and more parallel work. I’ll be sharing my thoughts here as we progress, and would love to hear your thoughts on the subject if you have them.
