Application Deathmatch: Desktop vs. Web
In this corner, wearing the red and yellow trunks, you have your typical desktop application, and in the opposite corner, wearing the red, white, and blue trunks, you have the same functionality in a web application. Now, you both know the rules: at the sound of the bell, come out with your hands up, ready to fight!
That is almost the situation in most corporate IS departments right now. Even though it is still early, it looks like the web application has taken an early lead. Now, let me state my bias. I am a web application developer, so I should have a bias toward web applications, right? Not exactly. I also love Java for desktop applications because of its portability. Let's evaluate some of the pros and cons of both approaches, and we can decide later who will be at the center of the ring, battered and bloody, but victorious.
The Underdog – Desktop Applications
Why the underdog? Well, what most people think of as desktop applications have been with us for about 20 – 30 years, depending on whether you are talking to people from Xerox PARC, Apple, or Microsoft. The tale of the tape for the desktop application is that it is extremely flexible in ways that its opponent, the web application, can never be. The desktop application can use the computer's hardware in almost any way imaginable.
The only barrier to development is the operating system, which is usually written in such a way as to allow developers as much flexibility as possible without giving them the ability to bring the entire system down. The application developer's language of choice is usually one of the following: C, C++, C#, Visual Basic, Objective-C, Java, Python, or sometimes, if the developer is in an arcane mood, Perl with Tk. C# and Java stand apart from the rest, with their own pros and cons, but we'll get into that in a minute.
Notice a lot of C's in there? That is because C is the language from which almost every modern language is derived. It is often difficult to develop in C because GUIs and the like weren't around when it was created, so it takes a lot of code to create the pretty images we take for granted every day when we boot up our computers. It isn't object oriented, so it is more difficult to produce clean code, which has spawned the object-oriented C languages, C++ and Objective-C. These languages let a developer easily call and implement modular components, to build an interface for example. The major flexibility of desktop applications comes in where a developer wants to present something in a non-standard way, like a box with three rounded edges. The developer can eschew the standard window and write a custom window look and feel. Most modern operating systems frown on this, but it can be done. Floating content on top of the window, or on top of text, is also easily done.
The killer right cross of the desktop application is the speed with which it can execute code. The application developer can count on the full attention of the computer's processor, video card, and sound circuitry if they so desire, at least for a while anyway. This makes it highly improbable that an application like Photoshop would be developed as a web application anytime soon. That appetite for resources causes issues, though. Office is about 300 MB, Creative Suite is almost a gigabyte with everything installed, and it really doesn't get up to speed unless the user has 1 GB of RAM installed. Users don't really want their applications to take up all of their drive space; they would rather fill it with pictures, music, etc…
The desktop application's right cross happens to coincide with the web application's weakness: it can use only the system resources that are available to the browser. Flash and Java are two ways to get around that, but not completely.
The developer working on the desktop application does have a couple of headaches, though. When writing an application, the developer will more than likely do it on a single computer, and that computer will run a single operating system, perhaps two, so the developer won't know if the application will run on a competing architecture. This isn't often a problem, since most of the world is using Windows on an x86-based machine, right? Wrong. As the future unfolds, it is likely that a de-homogenized computing environment will become the norm again, with many different architectures needing to communicate with each other. I say this because of the ever-growing use of computers in areas where they aren't traditionally used, media center devices for example. Microsoft has rolled out an entire operating system for this purpose, but there will probably be intense competition on this front, making it difficult for application developers to create software that will work on all platforms.
Enter Java. Java was developed so that it would run on almost anything. It has applications everywhere from cell phones to huge multi-CPU servers. It allows a developer to write a desktop application, graphical user interface and all, and have it run on almost any platform with little to no changes. It is my language of choice when developing desktop apps; however, it is limited in that it performs slowly on older systems, though that will continue to become less of a problem. This would seem to be the ideal language with which to develop, since it runs on everything to which the Java Runtime has been ported. Microsoft has its version of Java, called C#. C# has some of the ease of use of Java, but it is limited to Windows machines, which makes one wonder why the software wouldn't be developed in Windows' native C++, since it would run faster that way anyway. I don't see any practical use for C# as a desktop application development language, but it does have applications in web application development….
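To make that portability pitch concrete, here is a minimal sketch (the class name is mine, not from any particular product). The point is that the same compiled .class file runs unchanged on any platform with a Java Runtime; the JVM, not the developer, deals with the underlying OS and architecture.

```java
// A minimal sketch of Java's "write once, run anywhere" pitch:
// compile this once, and the same bytecode runs on Windows, Mac,
// Linux, or anything else with a Java Runtime installed.
public class PortableApp {
    public static void main(String[] args) {
        // The JVM exposes the host platform as ordinary properties;
        // the application code itself never has to change per platform.
        String os = System.getProperty("os.name");
        String arch = System.getProperty("os.arch");
        System.out.println("Running on " + os + " (" + arch + ")");
    }
}
```

The output differs per machine, of course, but the binary does not, and that is the whole trick.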
Desktops ultimately can run software faster, have far greater resources, such as the entire free space of the hard drive to store data, can use 3D hardware, etc… The biggest strike against the success of custom software development on the desktop is the deployment nightmare. When rolling out a new piece of desktop software, you have to ship it, or at best have users download it, and then each user installs that piece of software on their own computer. Then that user has a version running on their system that they can use at any time. Great!
The trouble starts when the developer finds a bug in the software. The developer patches the bug and then distributes the revised version of the application to the affected parties. After about 10 rounds of this, the users are all running different versions of the software, and the developer is fielding all manner of trouble calls, only to find out that the version in question has never been updated. There are complicated and sometimes expensive solutions to this problem in desktop application development. To make matters even more insane, most modern enterprise desktop applications use a database server, so all of these desktop apps are already storing their information in a central database; why would you write a desktop application again? The reasons are more of what we touched on above. Users like to print and have the printout look like what was on the screen. That is something desktop applications do very well, where web browsers leave much to be desired. OK, enough about desktop apps.
The Favorite – Web Applications
We've already done a good job of talking about the issues with current web application development. There are still, however, a few we haven't talked about. Connectivity is a biggie. If a user isn't connected to the internet or the local network, they can't use a web application. Now, in most business environments, users won't be offline for more than a few minutes, so this isn't as much of an issue as it used to be, but for home computer users, this is a big deal.
What's so good about web apps, then? Well, the distribution is much simpler. If there is a new version of an application, it is distributed the moment the developer puts it up in production. There are no messy distribution issues, and the calls the developer has to field will almost always be due to his sloppy coding at 2 am instead of very simple user issues.
Since the ways in which the user can interact with the browser are often very limited, there is a shorter learning curve; most web-savvy users already understand the concept of hyperlinks and the like. If the application is coded to standard XHTML, then it is every bit as portable as Java, and in the future it would even be portable to places where there is no Java runtime, such as internet appliances. Another huge benefit is that users can use the processing power of huge, complicated servers instead of being limited to their own small CPUs.
Web apps can do all of their processing on the back end, so the speed of the application is limited only by how much server dough the company wants to cough up. Storage is another benefit. While storage on the user's machine may be limited to a few kilobytes in cookies, storage for that user on the server is up to the web developer's discretion. If everything is on the server, and the server is backed up, the user is off the hook if their computer succumbs to viruses and has to be reformatted.
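A rough sketch of that client-side limit, sticking with Java (the cookie name and value here are illustrative, not from any real application): browsers typically cap a single cookie at roughly 4 KB, so the sensible pattern is to keep only a small token on the user's machine and key everything else to it on the server.

```java
import java.net.HttpCookie;

public class CookieSketch {
    public static void main(String[] args) {
        // Browsers generally cap a single cookie at roughly 4 KB,
        // so client-side storage is only good for small tokens.
        HttpCookie session = new HttpCookie("sessionId", "abc123");
        session.setMaxAge(3600); // expire after one hour

        // Anything larger (documents, preferences, history) is keyed
        // to this token and stored server-side instead, where space
        // is up to the web developer's discretion.
        System.out.println(session.getName() + "=" + session.getValue());
    }
}
```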
Rights management: this is a biggie. If users never have possession of the actual software, then they cannot redistribute it. The owner of the software has the ultimate right to allow that user to log in or not. If someone were to hack into the user rights, it would be much more easily traced than someone handing a disk to another person. Because of the ease of distribution and the lack of piracy, cost drops drastically. In the end, Microsoft is right, and web applications will be the wave of the future. They are wrong, however, in believing that they will control that future. Macromedia, with Flash and Flex, will be a very valid contender, especially if you compare ASP.NET with ColdFusion. I don't know many developers who, everything else being equal, would choose ASP.NET over ColdFusion.
Another benefit is that taking applications off of users' computers frees up system resources to run the operating system in a sandbox. Since the OS will almost never change and the applications users run will not be on their hard drives, the OS can be read-only for the most part and reside in RAM. Viruses couldn't do much more than knock the user offline, and that could be remedied with a power cycle, putting the user right back where they were. With advances in browser technologies, 3D hardware can become available to web developers, enabling a new level of online gaming and a new form of virtual office and business collaboration. Ultimately, we will be running Gaussian blurs on our photos online. Connectivity is an issue now, but in five years or so, there will be blanket wireless coverage in most major cities.
It looks like web applications are delivering a killer combo to desktop applications, who are up against the ropes. Which side ultimately wins is up to the hackers, and their undermining of the public's trust in the internet, and to the imaginations of web application developers.