Archive for July, 2005
I was just chatting with Sean Tierney about how cool VirtualPC is. Sean uses VirtualPC to set up instant development and production servers on his laptop for development and testing, and I use VirtualPC at work to run our Windows-only apps on my Mac.
The premise is simple. You start by creating a virtual computer instance on your existing hard drive, which is nothing more than a simple .vpc file. This file starts at about 40 MB, and will expand dynamically as you install an OS on it and fill it with files.
Next, you boot your virtual PC instance, which greets you with BIOS prompts. You then install your OS. You can then continue to install software and add files to it just like a regular PC. Of course, the best thing to do is to make a backup copy of the PC’s .vpc file AFTER the vanilla OS installation, but BEFORE you add any more software – that way you always have a fresh copy of your chosen OS ready to go (which is how Sean gets his instant server environments).
VirtualPC really has saved my life, since it enables me to be a Mac user in a Windows world. And, of course, because it allows me to install that copy of MechWarrior II (which I found in the back of the closet today) on my G4 in a Windows 98 environment – bwa ha ha!
Most of all, I’m interested to see how MechWarrior II reacts to having 50 times the minimum hardware requirements available at its disposal… :)
If you’re wondering what happened to my white paper on using Java objects with Spring and ColdFusion instead of CFCs, then fear not: it’s still on its way. I’ve been super busy with the J2EE project at my day job, which is finding its way into my out-of-work hours as well as consuming most of my at-work hours.
In the interim, your patience is appreciated.
We hosted Oracle’s technical sales team this week for a demo of their JDeveloper tool, a tour of the capabilities of their J2EE application server, and an overview of their long term J2EE strategy.
To be honest, the meeting was somewhat of a wash. We’re so heavily invested in Oracle that the likelihood of us not using their J2EE server moving forward is pretty remote, especially when we’re already running an earlier version of the same application server to host our Oracle Forms apps. However, I wanted to see what their strategy was, and better understand some of the more proprietary solutions that they offered, so that I could make intelligent decisions as we flesh out our own J2EE strategy.
JDeveloper was more impressive than I had expected. I’ll summarize my thoughts below.
- Integration with TopLink (Oracle’s object-relational mapping solution) looked good. We’re probably going to go with TopLink instead of Hibernate, because (a) it comes bundled with the app server, (b) it’s one of the oldest and most mature ORM solutions on the market, and (c) it has dedicated support.
- TopLink works with databases other than Oracle – very surprising (to me). I was expecting vendor lock-in.
- JDeveloper had a Java editor that looked like it was trying to catch up with Eclipse. Eclipse has the best Java editor I’ve ever used, and the refactoring tools are superb, so Oracle still has lots of work to do.
- There was an XML editor in JDeveloper that I liked a lot. Based upon what I saw, it allowed you to create graphical representations of your XSDs, and extract XSDs from raw XML documents.
- There’s a database querying tool built in to JDeveloper, which makes me happy since there aren’t many good, free database manipulation tools for the Mac (from what I have seen).
- If you’re supporting Java and stored procs in your environment, JDeveloper has a PL/SQL editor, which makes it a decent all-in-one solution for Java and PL/SQL editing.
- UML and round-trip engineering support looked decent. Not as solid as that available in Rational, but decent. One thing I did like in particular is that your UML diagrams are exported into your JavaDocs from JDeveloper automatically.
- Sequence diagrams are not supported in JDeveloper, which is a shame (considering that they are so important to the design phase).
- Oracle plans on creating Eclipse plug-ins for a number of JDeveloper’s features. If Oracle is smart, they’ll just move JDeveloper to the Eclipse platform altogether. It’s time for the straggling tool vendors/developers to get with the program and jump on the Eclipse bandwagon 100%.
- The JSP/HTML editor did what it was supposed to, but I can’t say I’d ever plan on using those features. The base looks and feels for the JSP apps were uninspiring to say the least.
- JDeveloper is now free, which makes it hard to argue against (regardless of your opinion of the tool).
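(If you’re wondering what an ORM like TopLink or Hibernate actually buys you: the heart of any ORM is mapping metadata – the framework knows which object fields map to which table columns, and generates the SQL so you don’t hand-write it. Here’s a toy sketch of that idea in plain Java. To be clear, this is not the TopLink API, and the CUST table and field names are invented purely for illustration.)

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Toy illustration of ORM mapping metadata: field-to-column mappings
// are registered once, and the "framework" generates the SQL.
public class MappingSketch {

    private final String table;
    private final Map<String, String> fieldToColumn =
            new LinkedHashMap<String, String>();

    public MappingSketch(String table) {
        this.table = table;
    }

    // Register a mapping from an object field to a table column.
    public MappingSketch map(String field, String column) {
        fieldToColumn.put(field, column);
        return this;
    }

    // Generate the SELECT an ORM would issue to load one object by key.
    public String selectByKey(String keyField) {
        StringBuilder cols = new StringBuilder();
        for (String col : fieldToColumn.values()) {
            if (cols.length() > 0) cols.append(", ");
            cols.append(col);
        }
        return "SELECT " + cols + " FROM " + table
                + " WHERE " + fieldToColumn.get(keyField) + " = ?";
    }

    public static void main(String[] args) {
        MappingSketch customer = new MappingSketch("CUST")
                .map("id", "CUST_ID")
                .map("name", "CUST_NAME");
        System.out.println(customer.selectByKey("id"));
        // SELECT CUST_ID, CUST_NAME FROM CUST WHERE CUST_ID = ?
    }
}
```

The real products layer caching, change tracking, and transactions on top of this, but the mapping metadata is the core idea.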
Oracle’s J2EE server was also quite rich in features and capabilities.
- The server itself is small and light enough to be embedded in JDeveloper for running test apps. I was expecting the server to be a huge beast, so this was a pleasant surprise.
- Oracle’s J2EE strategy seems to be quite open, especially when compared with their traditional approach to technology offerings (which tie you to their platform). ADF (Oracle’s data binding framework and JSF component library) and TopLink are both available as standalone products if you need them – for a fee, of course, but at least you aren’t trapped with Oracle’s platform forever if you build something using those technologies.
- Oracle has a Business Process Execution Language (BPEL) offering that looks quite interesting. Essentially, BPEL allows you to construct applications declaratively using service components from a service-oriented architecture. BPEL is an open standard, but it was nice to see Oracle offering it with tight integration to visual editing of BPEL processes in JDeveloper. The BPEL engine must be purchased as an add-on to the server; this was the only item that fell into this category (everything else came bundled with the app server).
All in all, I was relieved at the outcome of the meeting. I had expected to see Oracle present a strategy along the lines of “here’s how we intend to force our customers to migrate to J2EE using our proprietary solutions”. Instead, what I saw looked like a solid tool/server offering with plenty of flexibility, along with implementation choices for the customer baked in. To be honest, that sort of approach is much more likely to result in me purchasing a vendor’s products than the traditional strong-arm tactics so favored in the days of yore by the blue chip tech firms.
I’m looking forward to Monday, when I’m meeting with our Director to discuss our options. We’ve seen tools and products from Oracle and Rational, have invested some time learning about Flex, and have set our sights on a long term layered development strategy using J2EE. I know exactly which offerings I’d like to see implemented, but I won’t know the realistic possibilities until after we meet.
Either way, the future’s bright, and I’ve got my shades on.
I spend every Thursday in our Ocoee office, which is our building for call center operations. Basically, almost every marketing effort prior to a customer arriving at one of our properties originates at the Ocoee building. Since our most successful marketing campaigns are web based, I mosey on over to Ocoee each week to spend an hour with the project stakeholders for some face time and issue discussion. We then have break out sessions for items that require additional attention.
After this week’s meeting, I scheduled a half hour to take Paul (my boss, our Director of Software Development) through a very brief demo of Flex. It just so happened that we held the meeting in the office of one of our Oracle Forms teams, and before I knew it I was presenting to the entire team on a projector!
I certainly wasn’t prepared to go into great detail, but we had an excellent run through using the Flex Explorer application. We looked at the examples and discussed the code implementation, as well as the viability of the Flash platform and the benefits of a plug-in based model over a traditional web architecture. We also talked about the capabilities of an RIA compared to those of a web site or fat client, especially the benefits of having Flash as the driving technology.
Some excellent questions were raised during the demo. Would the product work in Oracle’s app server? Yes, it was recently certified to do just that. How would we talk to the database? Through logical layers of OO code, of course! What’s the scalability like? Better than Oracle Forms… :)
My next step is to coordinate with our other team members to round out a formal Flex presentation for Paul and the rest of the team. I have about five pages of outlines for a PPT presentation, and I’ll be applying my usual level of slide-fu to drive the points home. I see fantastic potential in this technology, and a solid match for what I believe we should be doing with our software. Now, it’s just a matter of investigating and justifying the ROI, and communicating the facts to senior management so that an educated decision can be made.
I must say, I’m really looking forward to the outcome of this process. I’ve wanted our entire software team to move to Java development for over three years; the benefits of J2EE development over our traditional technology choices are vast and undeniable. The way things are going, it might finally have a chance of happening, and that gives me great hope for the future.
Okay, so Rational Rose may be the old version of what Rational is doing these days – but I couldn’t resist the play on words. :)
The event I am referring to in this blog title is, of course, the on-site demo we had today from Rational Software (who are now a division of IBM). As part of our J2EE project, I’ve been tasked with finding tools that will support better requirements gathering for OO development, the creation of appropriate architecture documentation, and tighter traceability and integration between the two.
There’s really only one company that comes to mind in this problem space – and that’s Rational Software. Not only did this company stem from the formal bodies that invented the UML, but they’ve also created a very logical, well organized, and customizable SDLC for any type of software development (called the Rational Unified Process, or RUP). While this is an achievement in itself, it would be somewhat difficult to implement without the appropriate tools. Thankfully, Rational has also set the standard for tool integration and compliance with the techniques in their SDLC by creating their own tool suite.
Today’s on-site lasted about two and a half hours, and covered a top-level look at RUP and Rational’s suite of products: a soup-to-nuts, tightly integrated suite that creates automation and traceability at every level of the process.
A few items I found particularly interesting are listed below.
1) Prior to the demo, I was under the impression that you either adopted RUP 100% or you didn’t adopt it at all. This was a false assumption. Although RUP is incredibly comprehensive, and comes with tons of how-tos, samples, and process documentation, you can take or leave all the parts of it as you see fit. In fact, Rational encourages you to customize it to your needs.
2) Rational’s requirements gathering tools begin with business process modeling (whether those processes are manual and you just want to document them, or if you are planning on automating them through software), and then move seamlessly through requirements gathering, tracking change to requirements, implementing requirements in architecture, and then associating those requirements with the final code. You can literally track the origin of a system code artifact all the way back to the requirement that spawned it – astonishing.
3) Their entire toolset is moving to Eclipse (duh – they’re owned by IBM). My surprise was that their tools are being adjusted to run in Linux, which means they might also work natively in OS X (yay!).
4) The requirements tools sync up with Word, but save the atomic items (such as the individual requirement bullets) to a relational DB. This all happens seamlessly through macro integration inside Word. So, you get the flexibility of a rich word processor with the robust data tracking and querying of an RDBMS. Another solid win for the product.
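To make point 2 a little more concrete: requirement-to-code traceability boils down to the tools maintaining a link between each code artifact and the requirement that spawned it, so you can always walk backwards from code to origin. Here’s a trivial sketch of that idea in Java – the artifact and requirement names are hypothetical, and this is obviously nothing like Rational’s actual implementation:

```java
import java.util.HashMap;
import java.util.Map;

// Toy model of requirement-to-code traceability: each code artifact
// records which requirement spawned it, so the origin of any artifact
// can be queried directly.
public class TraceabilitySketch {

    private final Map<String, String> artifactToRequirement =
            new HashMap<String, String>();

    // Record that an artifact was created to satisfy a requirement.
    public void link(String artifact, String requirementId) {
        artifactToRequirement.put(artifact, requirementId);
    }

    // Walk from a code artifact back to the requirement behind it.
    public String originOf(String artifact) {
        String req = artifactToRequirement.get(artifact);
        return req != null ? req : "untraced";
    }

    public static void main(String[] args) {
        TraceabilitySketch trace = new TraceabilitySketch();
        trace.link("ReservationService.java",
                "REQ-117: guest can hold a room for 24 hours");
        System.out.println(trace.originOf("ReservationService.java"));
    }
}
```

The value in the real tools is that these links are maintained automatically as requirements change, rather than by hand.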
Right off the bat, I can see the advantage of our utilization of Rational’s business process modeling tools. If there is one thing I wish we had, it would be comprehensive business process documentation. Westgate Resorts is in the timeshare industry, and like most industries, it comes with jargon and terms that you’d never understand fully unless you had worked in the business. After 8 years with the company, I’ve been lucky enough to work in a lot of different areas and bring that knowledge with me to our IT department, but our newly hired programmers are not so lucky. Being able to give them business docs to study would be a great orientation exercise.
I’m also a huge fan of tools that integrate. This is one of the reasons I liked Adalon when we first started using it; work on one part of the process acts as a springboard for the next part of the process, and Rational follows suit. To be honest, if I were not so fortunate as to have good tools, and had to sync all my own documents, I’d go mad. Imagine doing process flows in Visio, requirements gathering in Word, project management in MS Project, UML modeling in another tool, and development in Eclipse with no automated links between tools/steps. If I had to do that, I probably wouldn’t even bother.
While this might sound like a prima donna stance, the bottom line is that all of those steps take a lot of effort, and there’s little ROI if you’re starting from scratch again at each step. If I can’t use my Visio diagram as a springboard to my UML modeling, and my UML model to springboard my code, and I don’t get back and forth integration between all the documents as things change, then I’ll end up spending more time syncing my documents than actually enjoying the benefits of having them in the first place. I’d end up getting less work done at the end of the day, which would be wasteful and counterproductive. Tools like Rational might seem pricey, but in a team environment of market-priced software engineers, the ROI of an integrated suite quickly becomes apparent.
So, while no decisions were made today, I definitely liked what I saw (and while I’m leading the process, I’m not the only decision maker). Also, the fact that Rational provides such tight links between requirements and code got me to thinking that we could potentially shorten our architecture process (as I was griping about earlier this week) by using more visual representations and fewer written descriptions of requirement implementation. To quote the cliché, a picture says a thousand words – especially when the picture is a UML sequence diagram! :)
I’ll be posting more thoughts on Rational tools and RUP as we delve deeper. We’ve also got an Oracle demo of JDeveloper and Oracle’s 10g J2EE Application Server next week, so keep your eyes peeled.
My team has been evolving our system design and technology strategy over the last nine months. We’ve been going from CF/Fusebox 3 with CFCs to CF/Fusebox 4 with Java.
Overall, I’m extremely happy with the decision, and my developers have echoed that sentiment. However, we’ve been having discussions recently about how much architecture we do prior to development, and how much detail we put into that architecture.
Here’s a run down of our development process once requirements gathering comes to an end. Our SDLC is based upon the Fusebox Lifecycle Process, or FLiP (more about this in my blog post on my FLiP presentation at CFUNITED 2005).
1) An architect sits down and goes over the prototype and requirements document, and begins to explore their ideas for implementation.
2) The architect decomposes the wireframe in Adalon (our Fusebox design tool) and creates a skeleton for the Controller in the Fusebox app. This is subjected to peer review by the team, during a presentation by the architect.
3) The architect fleshes out the Fusebox skeleton with Fuses and FuseActions, while simultaneously developing the API for the Java delegate that the web site will use to interact with any APIs (system specific or third party) needed by the application. (We use the delegate object to hide the actual APIs being used, which allows us to switch APIs on the fly with minimal impact on the site itself). This is also briefly reviewed by peers.
4) The architect develops the object model. Right now, we’re doing this by stubbing out the interfaces and concrete classes, and documenting their functionality and responsibilities in JavaDocs. Once again, peer review takes place when this is complete.
5) Finally, the architecture documentation is delivered to the developers for implementation. They code off of the docs with speed and ease.
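A quick sketch of the delegate idea from step 3, in Java. All of the names here (BookingApi, BookingDelegate, and so on) are invented for illustration – the point is simply that the site codes against the delegate, so the concrete API behind it can be swapped with minimal impact:

```java
// The actual API contract, which may be system specific or third party.
interface BookingApi {
    String reserve(String guest);
}

// Two interchangeable implementations of the same contract.
class LegacyBookingApi implements BookingApi {
    public String reserve(String guest) { return "LEGACY-" + guest; }
}

class VendorBookingApi implements BookingApi {
    public String reserve(String guest) { return "VENDOR-" + guest; }
}

// The delegate: the only class the web site ever talks to.
public class BookingDelegate {

    private final BookingApi api;

    public BookingDelegate(BookingApi api) {
        this.api = api;
    }

    // Callers never see which API is in play; swapping implementations
    // is a one-line change here rather than a site-wide edit.
    public String reserveRoom(String guest) {
        return api.reserve(guest);
    }

    public static void main(String[] args) {
        BookingDelegate delegate = new BookingDelegate(new LegacyBookingApi());
        System.out.println(delegate.reserveRoom("smith")); // LEGACY-smith
    }
}
```

Switching from the legacy API to the vendor API is then a matter of constructing the delegate with a different implementation, and none of the Fusebox code changes.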
If this sounds like a lot of work up front, it is – and with good reason. As a manager of a development team, I have to strike a careful balance between total chaos and over-documentation/over-engineering. Imagine these two scenarios, on polar opposites of the spectrum.
I call the first scenario “Developer Code-Fest.” In this scenario, the development team is made up of top-flight developers, all hopped up on caffeine and Slashdot, and spewing catch phrases like “agile” and “XP” when they really mean “no documentation” and “by the seat of my Jolt-stained pants.” The business requirements are timidly slid under the door of this rabid geek party by the Project Manager. All hell breaks loose. As the sound of clacking keys reaches a crescendo, the developers froth at the mouth uncontrollably, with no thoughts of forward engineering having ever crossed their minds. Upon completion of their masterpiece, they ritualistically torch the requirements document with lighter fluid in a metal trash can, and throw the application over the wall to QA. When the project manager asks them where the technical documentation is, they go back and add just enough to satisfy “the pointy-haired bosses” – after all, they can’t remember what the hell they programmed, and if they needed to make changes, they’d just go back and read the code, right?
I call the second scenario “Architect Yourself Into Oblivion.” In this scenario, the project arrives fully documented to the thoughtful and bespectacled architect, who regards the documents with the same level of interest as a biologist who has just discovered a new strain of llama. After three weeks of reviewing the requirements and quietly sipping chamomile tea, the architect decides that they need an additional six weeks to investigate the cleanest design pattern for implementing the “log out” process. A further fifteen business days are lost discussing the benefits of implementing persistence using strict ANSI-92 compliant SQL, as opposed to Oracle’s slightly specialized version, which (horror!) would never be portable to DB2. Eight months later, the architect is halfway through the design process and has written seven tomes of documentation, which, while resplendent in their undeniable design simplicity and elegance, can’t actually be implemented in any of the languages presently available to mankind. The project runs out of money, everybody gets fired, and the entire island of Puerto Rico is able to power itself for six months off of the BTUs generated by burning the architecture documentation.
Of course, I slightly exaggerated both scenarios, but the point gets made: where do you draw the line between the no documentation/design scenario and over-architecture?
I guess I’ll state the case another way to reduce the scope of that question. Here are the things that are important to me as a manager, with the reasons why.
Documentation Before Coding
I’ve never seen a developer do a decent job of documenting a system after it was programmed. The docs always assumed a level of knowledge of the code that you could only have had if (a) you programmed the thing in the first place, or (b) you read all the code. As a result, I would prefer that the system be documented ahead of time, and (preferably) actually implemented primarily by people who did not write the design documentation. The theory behind this is that if the docs don’t make sense by themselves, the system could never be implemented in the first place.
Peer Review of Design Before Coding
I want everything to be looked at by at least two people. Nobody is as good as everybody put together. I know this from experience, because my team has never left a peer review session without a cleaner design coming out of the conversation. (This also applies to post-implementation code review in equal measure.)
Ease of Use, Ease of Change
If my developers finally win the lottery pool, split the winnings, and move to Tahiti, I need sufficient documentation for my new development team to understand the old systems quickly at the usage level (i.e. make this work) and the implementation level (i.e. add this feature without breaking what’s already there). I also need the ability for different developers to work on each other’s code, without one person being the guru on specific pieces that they built; gurus become bottlenecks to their creations, and end up being bound to the system they built rather than evolving their skills with new systems. Finally, I need third party teams to be able to interact with our APIs without needing to understand the complex intricacies of the API’s innards (again, the usage docs satisfy this criterion).
To date, I haven’t found a good way to meet these goals without doing a decent amount of up-front design work. On the flip side, most of our apps designed in this fashion have been great successes, and have been relatively well-designed and easy to maintain, so the payoff has been there. As for development timelines? They aren’t always the shortest, but they are far from unreasonable – and after all, the biggest overall system cost/wasted future development effort comes from bad design or lack of documentation, so ultimately you save time and money by doing more work on the front end.
So, you might wonder why the hell I’m looking to change anything if what we have works. The answer is simple: if there is a shorter way to get the same results, I’d like to be using it. I’m a huge fan of process improvement, and finding new ways to do old things better. I guess what I would like to see is more agile approaches to our development process, more parallel architecture, and the same level of forward documentation and design as we have at present – without the same volume of work and time required to produce it. To quote the cliché, time is money.
To that end, I’m looking at a lot of different avenues, and my team is having a lot of discussions about where we can create efficiencies and more parallel work. I’ll be sharing my thoughts here as we progress, and would love to hear your thoughts on the subject if you have them.
As part of the J2EE project I’m heading up, we’re evaluating front end options. Presently, our Oracle applications are developed using Oracle Forms 6i for front end interaction.
Personally, I have never been a fan of Forms for the following reasons:
1) With Forms, you’re bound to Oracle – plain and simple. Lack of vendor/tool flexibility is bad for enterprise software development.
2) The UI functionality is restrictive. It does data input/validation pretty well, but an interactive, easy-to-use UI can be a challenge or an impossibility depending upon what you want to do with it.
3) I find the client/server model in Forms extremely strange: you get a heavy client (Java applet) and a heavy server process (in-memory representation of the form on the server). Shouldn’t at least one side of the connection get a break? (I won’t share any numbers, but the scalability hasn’t been stellar based upon our user-to-server ratio).
4) The Forms framework seems to thrive on absolutely no abstraction between your view and your data model. The tools for building Forms literally lead you to creating direct bindings between UI elements and database columns, and to a certain degree require business logic code to be embedded in the front end.
As a result, we’ve been looking elsewhere for front end choices. Oracle is heading toward a JSF-based strategy for their tool and application suite, so that naturally rose as an option. As for myself, I believe very strongly in the future of RIAs (Rich Internet Applications), so I threw Flex in to the mix.
So, why don’t I like JSF? A few reasons, backed up by many years of web development experience:
2) JSF is supposed to figure out your browsing environment and render the components accordingly. This is a nice goal, in the same way that contestants at beauty pageants wishing for world peace is a nice goal. I say this because I’ve never seen a tool that actually supported multiple browsers seamlessly. (I even have it on good authority from a vendor using JSF that he has already discovered that this goal is a fallacy; this vendor only supports two browsers for his app, and had issues with some simple JSF controls that they still haven’t figured out – and he used the standard JSF tag set.)
The bottom line is, there is no way to guarantee the enforcement of web standards. By basing a framework on such a house of cards, I almost feel like JSF is a lost cause before it even begins. There are some nice documents on the W3C web site describing how browsers should implement web technologies, but no browser actually works the same way. They can’t, because even specifications are open to interpretation by the developers implementing them, so I doubt that the situation will ever be resolved. (If you want a great dissertation on this, I suggest that you check out Designing with Web Standards by the great Jeffrey Zeldman.)
With Flex, there is one dependency: the Flash player. It’s been around for years, and everybody has it. It works in old browsers. It works in new browsers. It works on every major OS on the planet. It runs in phones. It runs in PDAs. It is developed by one company, to one specification. This means that if I develop Flex applications, with custom UI controls and all the other bells and whistles, they will always work the same way, whether my client is Netscape 4 or IE 8 (due in 2047), running in Slackware on a cellphone or Longhorn on a laptop.
Some of you may be crying foul: “But Max, you just said that vendor reliance is a bad thing! Surely you would be relying on Macromedia if you went with Flex?” And my response would be, you are right. However, Flex has a few things on its side: a very loosely coupled programming model, and an excellent set of capabilities that would serve us well for many years. With the programming model alone, it would be simple to retire Flex in the presentation tier after many years of service (if such a necessity arose), while guaranteeing our development investment at the business tier and beyond. And let’s be honest: the business tier is where the money is.
Ultimately, you have to accept the fact that your front end is destined to change as new UI technology emerges; the trick is to get the most mileage out of it before that day comes. With Flex offering J2EE and .NET integration out of the box, plus an incredibly attractive and powerful UI layer, I would bet my bottom dollar that we’ll get plenty of use out of Flex before we have to find something else for our users to poke with their mouses.
And why not go with OpenLaszlo (Flex’s open source second cousin)? Well, I’ve been playing with Laszlo for some time, and I like it – but it’s not vendor supported, and it lacks the polish that Macromedia has lovingly applied to Flex. Plus, I wouldn’t be entirely comfortable building an enterprise around a product that does not have a team of paid support engineers behind it. Not to mention, Flex feels much more mature than OpenLaszlo, has nice development tools (sorry, IBM – Flex Builder knocks the pants off of the Laszlo IDE for Eclipse), and there are even excellent books about Flex.
Oh, and Flex’s default component set is much prettier. Sure, you could spend time creating custom look and feels for Laszlo, but why bother when the Halo LAF in Flex looks so darn good?
Have some thoughts on JSF, Flex, Laszlo, or this blog post? Hit me up on the comments, my friend.
I’ve been working in the software development industry for some time now, and source control is a huge piece of the puzzle for effective change management and development. However, I’ve never actually had to administrate source control – that was always within the realm of our sys admin team.
However, we’re now evaluating source control systems for our new J2EE project at Westgate Resorts, and I wouldn’t mind having something set up on my laptop to use on my many personal projects. Naturally, I’ve been looking at Subversion for a while, since it’s essentially CVS with all the bugs fixed and the features rounded out (and CVS sure wasn’t bad). I’ve also worked with development teams who used the Eclipse CVS plug-in, and it makes life very easy indeed.
So where do I start? With Google, of course! For those of you who may be new to Subversion or in the same predicament as I found myself, I thought I’d post the links and resources I found useful.
Subversion Book (Free)
This is an O’Reilly Subversion book that is being given away for free. I downloaded the PDF version and stored it with all of my other PDF manuals on my laptop. I read the first four chapters in about an hour, and I must say that it is excellently written. It also covers the concepts for trunks, branches, merging, and repositories for those who are new to source control. I’d even recommend it as an excellent primer on these concepts for any source control implementation.
Eclipse Plug-In for Subversion
I haven’t installed this yet, but here it is. I’m sure that (as with most Eclipse plug-ins for popular software) it will be excellent.
Not using Eclipse, but need a GUI client for OS X? I suggest SvnX (I think this is pronounced “sphinx”), a GUI tool for SVN (the commonly used short name for Subversion). I also found that SVN installation is pretty darn easy if you use Fink (a package and dependency management tool for open source software utilities on OS X, similar to rpm).
That should be enough to get anybody started. The book covers history, concepts, software installation, administration, and extension through development APIs, so that’s your best first bet.
Good luck and let me know what you think of Subversion once you get it running!
Apple produced their quarterly report this week, and was met with investor enthusiasm across the market.
If you ask me, it’s about frickin’ time.
Sure, Apple’s stock has skyrocketed over the last two years, but it was always based on industry analyst opinions on the iPod. Nobody will deny that the iPod is the driving factor in Apple’s success – they split off an entire business unit for it in their quarterly reports, no less – but for so long, that’s been the only factor that Apple has been measured upon by the industry.
For example, last quarter, Apple had a stunning set of results across the business. However, since the iPod didn’t do quite as well as the analysts had hoped, the stock dropped. I thought this was ridiculous. The rest of the business was doing incredibly well, and some of the obvious results of their strategy for the last three years were swinging into motion.
It seems that this quarter, analysts looked at the big picture. The “halo effect” showed enough oomph for the analysts to admit that it was viable (duh). Mac sales were incredible. Laptop sales were incredible (I contributed to one of them :) ). iTunes is set to hit half a billion songs in short order. Retail stores are growing and continue to be profitable. And there was a huge R&D line item that indicates great things to come (of course, Apple remains tight-lipped about future product strategy, as they should).
Personally, I’m hoping for a few gems from Apple in the next few years. None of these suggestions is based on actual reality, but more the fantastic nature of my overactive imagination.
1) Apple becomes a mobile carrier, and offers top-notch wireless broadband service. Fees will be extortionate, but customers will flock to Apple’s brand like the iPod-crazed monkeys that Apple’s marketing department has trained them to be.
2) The iPod is reborn as a phone, PDA, video player, Sirius satellite music player, and game machine (in addition to its present feature set). It connects to Apple’s broadband network, from which you can download movies, games, and whatever else Apple decides you need to buy. It has Bluetooth so that you can connect to the Internet wirelessly from your laptop using Apple’s broadband cellular network without even needing to take the iPod off of your belt buckle.
3) Some kind of evolution happens to the Mac Mini which turns it into the ultimate home entertainment system. I’ll leave the details to the Apple engineers, but undoubtedly it will be very cool, integrate seamlessly with the Internet buying experience, and will trounce the pants off of the embarrassing failure known as the Windows Media Center PC.
Hey, I can dream…
I still haven’t had a chance to test Adalon on OS X Tiger, but I’m pretty sure my friend Jeff in South Florida is using it and hasn’t had any problems. I blogged previously that Sean Corfield was having issues with it, and that I was going to investigate it and post my findings.
I’m emailing Jeff now, so maybe his response can shed some light on the issue.
UPDATE: Well, I just tried it out, and it looks like Adalon runs fine on Tiger. I copied my installation on OS X Panther (on my laptop) over to Tiger 10.4.1 (on my desktop), and it ran without issue. I emailed Sean to see if he gets any output to Console when booting Adalon, which might help diagnose the problem. I’m also using the 1.4.2_07-215 JVM, which may be different to Sean’s if he’s using the newer version 1.5 JVM (which I haven’t downloaded yet).