Today is a good day to code

IE7 Using CURI to Handle URI Objects

When people think about the issues plaguing much of Microsoft's software, they often assume lazy coding is the cause. Sometimes that is the reason; other times it is the deadlines the team had to meet, or simply that no one believed a potential bug could become a real problem. One issue web developers have had to work around since IE 5 came out is the roughly 2 KB limit on URL strings. Another is that attackers could send IE a malformed URL that fooled it into treating their site as trusted, then wreak havoc by pushing malicious ActiveX commands to trash the system.

IE 7 so far doesn't look like it has a bunch of sexy features, but under the hood Microsoft is working hard on this release. From partial standards compliance to running IE in a reduced-permissions sandbox, they are really trying to get people to trust the internet again. On top of that, Microsoft is building tools into IE to detect whether a site is on a list of “bad” sites that Microsoft will maintain. But one of the coolest enhancements to me is the CURI object. Instead of a raw string, the URI becomes a structured object, so a programmer can validate the CGI variables apart from the rest of the URI. If someone tried to slip a malformed URI down the pipe, validation of that CGI string would fail, and the attack with it. In IE 5 and 6, the URI was handled as a plain string and passed around the code, and strings give the developer limited ability to validate one part independently of another. There are plenty of substring functions and libraries, many built into development languages, but they cost the developer in performance. Was Microsoft lazy? Who can say, but it seems as though they are working hard to make IE 7 everything that 6 should have been.
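Microsoft hasn't published the CURI interface itself, but the idea translates to any language with a structured URI type. Here is a minimal Java sketch using java.net.URI to illustrate the principle: the string is parsed and validated once up front, and each component can then be checked on its own. The 2048-character cap below is an invented example threshold, not IE's actual rule.

import java.net.URI;
import java.net.URISyntaxException;

public class UriCheck {
    public static void main(String[] args) {
        try {
            // Parsing validates the overall syntax once, up front
            URI uri = new URI("http://example.com/page?user=42&view=full");

            // Components are typed accessors, not substring hunts
            System.out.println("host:  " + uri.getHost());
            System.out.println("path:  " + uri.getPath());

            // The query can be validated in isolation; the cap here
            // is an invented threshold for illustration only
            String query = uri.getQuery();
            if (query != null && query.length() > 2048) {
                System.err.println("rejected: query too long");
            }
        } catch (URISyntaxException e) {
            // A malformed URI never reaches the rest of the program
            System.err.println("rejected: " + e.getMessage());
        }
    }
}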

Microsoft's IE Blog


The MSN Bot

A few days ago, I noticed that the MSN bot had been hitting my RSS feed more often than any sane bot should. The MSN search team at Microsoft has said that they are experimenting with the web droid. Some publishers are complaining because, in some cases, the crawling is pushing their bandwidth over the amount guaranteed in their hosting agreements.

While having to pay more for hosting can be a real pain, MSN's ability to recognize RSS feeds and keep checking them for updates is a good feature, since some bloggers post all the time and blog readers want up-to-date information. Services that use the rpc-ping system, like Technorati, typically do a very good job of crawling for the latest posts only when there is actually an update. Perhaps Microsoft could implement this sort of function in MSN search, although with only 5.5% of the search market it might be difficult to get people to actually use it.
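For context, a ping is just a tiny XML-RPC call: the blog tells the service “I changed,” and the service crawls once in response instead of polling on a schedule. Below is a rough Java sketch of a standard weblogUpdates.ping call; the endpoint URL and blog details are placeholders, since each ping service publishes its own.

import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;

public class BlogPing {
    public static void main(String[] args) throws Exception {
        // weblogUpdates.ping takes the blog name and the blog URL
        String xml =
            "<?xml version=\"1.0\"?>" +
            "<methodCall><methodName>weblogUpdates.ping</methodName>" +
            "<params><param><value>Owens Performance Blog</value></param>" +
            "<param><value>http://www.example.com/blog/</value></param>" +
            "</params></methodCall>";

        // Placeholder endpoint; real services publish their own
        URL url = new URL("http://rpc.example.com/ping");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("POST");
        conn.setRequestProperty("Content-Type", "text/xml");
        conn.setDoOutput(true);
        try (OutputStream out = conn.getOutputStream()) {
            out.write(xml.getBytes("UTF-8"));
        }
        System.out.println("ping response: " + conn.getResponseCode());
    }
}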

Jeremy Zawodny – Dear MSN Bot


Possible Apple and Google iTunes Deal

I am really ambivalent about the possibility of a deal between Google and Apple to help the search company deploy a music offering similar to Yahoo's Launch. Google hasn't been making software for Macintoshes; I am still waiting for Google Earth for the Mac. It shouldn't be that hard, since they already have a 3D implementation. I could understand the delay if it depended on DirectX, or used ActiveX controls to display in the browser, but this is a standalone program. Does Google really care about the Apple users out there? On the flip side, there is a really strong business case for the deal.

If Google were to feature songs from the iTunes Music Store, Apple could expand iPod penetration even further than the amazing levels it has already reached. Believe it or not, the numbers say the once-rabid iPod acquisition rate has begun to plateau, and the profitability of the devices has been diluted by the proliferation of the iPod Shuffle. Still, the problem is that most of the people I know who live in the middle and south-east of the country don't really understand the iPod, podcasting, Napster, or any of it; many of them still frequent CD stores. The iPod is mainstream in America's big cities, but it is still fringe on Main Street America. Google has penetrated much of that market to a much higher level than the iPod has, and Google is a trusted name, much the way Westinghouse was in the fifties and sixties. For Apple, tying itself to Google's image can only be a good thing.

Google, however, should take care. Any such deal will stoke Microsoft's already boiling ire, and Google isn't ready for all-out war with Microsoft at this time, no matter how rich they are. Google is still very dependent on Microsoft's technology, since Microsoft owns the OS market. When the Google OS comes out, sporting a thin-client Linux system with a slick interface and applications delivered over the web, then Google will be ready. While they are probably working on something like this behind the scenes, they are wisely not parading it in front of Microsoft. Still, as paranoid as Microsoft is, Google should not tie itself too closely to the rival Apple. It would be better for customers, me in particular as a Mac user, but it may not be wise to wake a sleeping giant by shouting in his ear.


Whirlwinds of Code and Forming Design Patterns

One of the biggest issues with understanding object-oriented programming is getting over its associated terminology. Most developers, whether they realize it or not, have formed design patterns and use them all the time. If an established developer were to look at their code, they would often see that their application breaks down into data access and storage components, display components, and the logic that lets them communicate. When asked to bring a system from one application to another, they can usually do it with little to no modification. This is the idea behind design patterns.

In an interview yesterday I realized that I had better take a more aggressive look at design patterns. The terminology may be tough, but it is an excellent way to communicate an application's business needs, especially using UML, as well as to describe, at the lowest level, the objects that will make up your frameworks. I have always tried to get a firm handle on design patterns, but they have largely eluded me. I understand simple systems, like breaking code into well-defined model-view-controller layers and using messaging to communicate between the layers, but the more advanced concepts have never really clicked. In the interview I noticed that by using Fusebox I was already designing with some object-oriented concepts, even if I didn't know what to call them, while ignoring some of the more specific ones.

Programming is often like the game Dark Cloud for the PlayStation 2. For those who haven't played it, it is a role-playing game in which the player travels around trying to re-assemble their world, which was scattered by an evil genie. The player starts with a fairly weak weapon and, by traveling the world, gains objects that make the weapon stronger. With enough objects, they can merge them permanently into the weapon, increasing its effectiveness, and then begin the process anew, adding objects to the newly enhanced weapon until they can merge again. With better weaponry, the player can recover pieces of the world more quickly and assemble more complex worlds in less time. It is like this with programming. Often I feel as though code is whirling around me, and once I have that “aha!” moment it merges and becomes something solid that enables me to take the next step. Building large applications has pushed me to develop my own frameworks and APIs. For example, most of my applications require search, so I have developed a fairly thorough search framework, made up of components, that can be moved between applications with little modification. I wouldn't have known to call it a framework, but that is what it is.

Last night I had one of those “aha!” moments, the moments we all write software for: I was finally able to put names to some of what I was doing. Now that I am beginning to understand, I can see why it is hard for experienced object-oriented developers to explain OO design to procedural developers; you just begin to think differently. I can see about ten areas in which I can improve my search API / framework to make it more portable. The hardest part is finding the dragon; slaying it is easy. In other words, matching design patterns to what you are doing is hard, but once you can put names to faces, so to speak, the rest is simple. For a while I could never understand why people were so excited about Microsoft enabling the use of C# in SQL Server 2005. Now I can see it: you can create an entire data access framework on the database server, abstracting the underlying database and its queries from the application. In a web application, it becomes possible to completely separate the model from the controller and view layers. That has huge benefits for code maintenance, because any number of applications could use the data access framework through web services.
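To make that separation concrete, here is a minimal Java sketch of the idea; the names are mine, not from any particular framework. The controller and view program against a small interface, so whatever sits behind it, plain SQL, C# procedures living in SQL Server 2005, or a web service, can be swapped without touching them.

import java.util.List;

// The contract the controller and view layers see; nothing about
// SQL, stored procedures, or web services leaks through it.
interface ContactRepository {
    List<String> findNamesByCity(String city);
}

// One interchangeable implementation; a web-service-backed or
// in-database (CLR) implementation would satisfy the same interface.
class InMemoryContactRepository implements ContactRepository {
    public List<String> findNamesByCity(String city) {
        return "Sacramento".equals(city) ? List.of("Irv", "Ada") : List.of();
    }
}

public class RepoDemo {
    public static void main(String[] args) {
        ContactRepository repo = new InMemoryContactRepository();
        System.out.println(repo.findNamesByCity("Sacramento"));
    }
}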

What really pulled it together for me was understanding why Java felt so tough. My mostly procedural mind was trying to write a program by thinking about what each class should do, instead of asking what a class knows about itself, what its purpose is, and how its methods help it achieve that purpose. In short, I was trying to write a simple program instead of designing a toolset to help me reach my goal. With Java, you have to diagram and chart or you will just get lost. Even Objective-C makes more sense now, with its convention of overriding the init method for objects. These things didn't make sense before, but now I am getting it. I still have a long way to go, but I think I'll start working with Mach-II, even if there is a performance hit. It is a little more OO than Fusebox, but Fusebox is a great foundation for it.
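A tiny, hypothetical Java class shows the mental shift: instead of a procedure that does something to a balance variable shared with the rest of the program, the object knows its own state, and its methods exist to serve its purpose of protecting that state.

public class Account {
    private double balance; // the class knows this about itself

    public Account(double openingBalance) {
        this.balance = openingBalance;
    }

    // Behavior lives with the data it protects
    public void withdraw(double amount) {
        if (amount <= 0 || amount > balance) {
            throw new IllegalArgumentException("bad withdrawal");
        }
        balance -= amount;
    }

    public double getBalance() {
        return balance;
    }

    public static void main(String[] args) {
        Account a = new Account(100.0);
        a.withdraw(40.0);
        System.out.println(a.getBalance()); // prints 60.0
    }
}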

All that said, there are still instances where you can go too far with data encapsulation. Say you had a table of contact information. You wouldn't want to return each row, create a struct out of it, and then write an iterator method to walk each struct and its elements, at least not in ColdFusion. Iterating over a query is something ColdFusion's built-in tags already do well, so building a framework to disassemble a query object and re-assemble it as a bunch of structs is an unnecessary layer of complexity for most applications. Like anything else, discretion is required. Now I'm ready to tackle the UML book, and hopefully figure out how to use that nifty ID3 tag reader framework for Objective-C that I downloaded a while back and couldn't quite figure out. I've got Macintosh applications that need to be developed.
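The same trade-off exists outside ColdFusion. In Java terms, the temptation is to copy every row of a result set into a bean and a list before anything reads it; iterating the result directly, as below, is one less layer, and for simple display code the extra objects buy nothing. Connection details are placeholders.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class ContactList {
    public static void main(String[] args) throws SQLException {
        String sql = "SELECT id, username FROM contacts";
        // Placeholder connection string; any JDBC source works
        try (Connection conn = DriverManager.getConnection(
                 "jdbc:mysql://localhost/testDB", "user", "pass");
             Statement st = conn.createStatement();
             ResultSet rs = st.executeQuery(sql)) {
            // Read rows in place: no per-row structs, no second
            // pass to re-assemble the query as something else
            while (rs.next()) {
                System.out.println(rs.getInt("id") + " "
                    + rs.getString("username"));
            }
        }
    }
}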

Here are a few of the sites that helped me get to the “aha!” moment.
Macromedia excerpt from 'Design Patterns'
ColdFusion object factories, the Composition CFC
Introduction to the Mach-II framework


The Microsoft Trinity

Microsoft's reorganization into three divisions makes sense in the business world, but it remains to be seen whether Microsoft can let these vast entities it has created within the company function independently enough to behave like companies. I think Microsoft didn't go far enough; it may have been better to break the company up further.

The MSN group should remain on its own, but with the full backing and cooperation of the other units. It should focus on adding more web functionality to Microsoft's applications, like automatic backups of Word, Excel, and PowerPoint documents to a virtual drive, so that you could work on things on the road, away from your personal computer.

What Microsoft has done may improve its ability to react to Google, but that is the operative word: “react.” It will not gain a greater ability to innovate; the organization won't allow it. Microsoft is too tied to its established cash cow. What will happen, however, is that Google will see this as throwing down the gauntlet, and it will accelerate its pace toward world domination.

In a nutshell, here's how I see things shaping up. Google will launch a nationwide Wi-Fi service offering free, mostly secure, high-speed internet for everyone, followed by a huge surge in advertising revenue as its market expands. Microsoft will launch something vaguely similar several months to a year later. Then Apple will release Mac minis with Intel CPUs first. That will prompt many PC users to buy a mini just to get their hands on OS X for Intel, which by some amazing feat will be cracked at launch to run on any PC. This will do two things for Apple: first, it will undermine sales of Windows Vista; second, it will increase Mac sales numbers because Apple will be moving product. Google will follow with more business-oriented applications based entirely on the web, using its desktop application as a vehicle, and will start building widgets for the Macintosh that mirror those available through Dashboard. This two-pronged attack on Microsoft will prove too much. Microsoft will remain around, constantly behind Google and Apple, and will end up like Sun: supplying products to the top 1% of the market while enjoying none of the fame. Apple will be back where it should have been all along, as the dominant computer manufacturer. Microsoft will remain a close second, but it will continue to slip until it performs another reorganization.

That is the future. Put it in your pocket right next to your iPod nano!


Ridiculous Microsoft Hatred

Either the people asking Microsoft to scrap Windows Vista are super-elite programmers who are completely beyond reproach, or they have never written a line of code in their lives. How could anyone ask a software company to guarantee that its software will contain no bad code? Every developer, no matter what kind of code they write, knows that code is buggy. There is no way to completely debug something as complex and backward compatible as a Microsoft operating system.

It is strange for me to come to Microsoft's defense, but one has to admit they did a bang-up job getting Service Pack 2 out for Windows XP once they recognized the problem. They managed to fix the most egregious holes without completely breaking the operating system, and they spent more money on that one free upgrade than many software companies spend on research and development in an entire year. I'm sure they are working overtime to make Vista as safe and stable as an operating system can be, especially when users install ten-year-old software, buy equipment from obscure overseas vendors with no quality control over their drivers or hardware, and then complain that their computers are unstable.

I won't be buying Vista because I'm not a Windows user, but if I do switch back, I'll want Vista because it will be the best Microsoft can offer. It sounds like some people hate Microsoft simply because it exists.


What Does Google Want With Weak AOL?

I'm sorry, but Google buying AOL would be a huge waste of money. First off, AOL has nothing that Google doesn't have, and buying it to compete with Microsoft would be stupid. The analysts still don't get it: Google isn't afraid of Microsoft, or anyone for that matter, nor should they be. They are the 500 lb gorilla of search. You could take MSN's search traffic, multiply it by two, add AOL's, then add the traffic of every other search engine except Yahoo, and it wouldn't add up to half of Google's.

The reason Time Warner is considering selling AOL to Microsoft, of course, is that AOL is lame. Only two good things have come out of AOL in the last decade: the first is AIM, the second is Winamp, which does indeed whip the llama's ass. Still, the success of Winamp has not led to a decent music service, and AIM has not led to anything except a great platform with an annoying client. They only just launched an email service for non-AOL members a little over six months ago. They are cash rich and bloated.

For that matter, two sagging, fat companies like Microsoft and AOL do not a Google killer make. Why can't they see this? They should read more Sun Tzu; The Art of War ought to be required reading for any executive in corporate America. Everyone needs to write off broad-based search: Google has won, and there is no catching them. Instead, competitors should focus on what they do that Google doesn't, in an effort to contain Google to search. By following Google in whatever it does, they are playing to Google's plan. One of the overriding concepts in The Art of War is that if your enemy is larger and more powerful than you, you have to annoy them into making a mistake; having them follow you all over creation weakens them and lets you destroy them at home. In this instance Microsoft will chase Google on everything it tries, taking its focus more and more off the operating system, only for Google to release Goffice and the GoogleOS, effectively destroying Microsoft. What Microsoft should do is make Office more available on the web, meaning web-based Word, Excel, and PowerPoint for enterprises, and focus on making Vista more than Windows XP Service Pack 3; it should be robust and provide new and amazing features.

AOL should focus on getting its large base of rural customers onto broadband, even if it means losing money; that is the only way to push the TV-over-IP that the Time Warner partnership was supposed to bring. The fact that the majority of their users are on dial-up should signal a problem, as should the growing impatience of their parent corporation. If they weren't so fat, they would wake up and realize they need to do something right now, other than looking for another sugar daddy to keep them providing the same stale services they have been serving up for the past decade.

Other than Yahoo, no one has been able to change their business model to counter Google; apparently both of them have been reading the above-mentioned book, and they are playing each other perfectly. Watch that space, because the battle between Yahoo and Google will shape the future of computing. Short of a miracle of clarity, of which Microsoft is capable, Microsoft is going to go the way of IBM: rich, but no longer important to the cutting edge of information technology.


SQL Server 2000 vs MySQL

One of the funny things about comparing these two systems is that they are both medium-sized DBMSes. Neither is really suited to huge enterprise-level solutions, although both companies will say that they are ready for them. I have not implemented any system on Oracle, Borland, or DB2, so I can't speak to their performance in an official capacity. Unofficially, however, I have noticed that they are significantly slower under light load, but under heavy load they perform much better, while Microsoft SQL Server 2000 and MySQL fall off quickly. This is evident simply from looking at hosting companies that provide either SQL Server 2000 or MySQL: in a local installation these systems are blazing, often returning query results in under 100 ms, but in production on shared hosting most of my queries take anywhere from 250 ms to, regularly, 2000 ms, both of which put me outside the desired window. Oracle, by contrast, hovers around 100 ms locally and still seems to hover around 100 ms in production. I have heard, and witnessed to some degree, that Sybase feels the fastest, with each subsequent iteration of a query running faster. Sybase is probably the smartest DB as far as adapting to use, while Oracle seems to be the strongest.

For most web applications, though, SQL Server 2000 or MySQL is just fine. Between the two, after working with SQL Server for some time, I have come to really love Enterprise Manager and its associated tools, Query Analyzer and the Profiler; when it comes to full-text indexes, however, MySQL wins hands down. As far as performance is concerned, from what I have seen MySQL is faster under moderate load, while SQL Server is better under heavier loads and appears to have better caching. On caching, SQL Server has MySQL flat-out beat. MySQL could do a better job of using the cache for similar queries. For example, suppose I ran SQL like this:

SELECT id, username, password, lastAccess
FROM testDB

It would return that dataset and hold it in memory, but it wouldn't be smart enough to realize that the following query is just accessing a subset of the one above:

SELECT id, lastAccess
FROM testDB
WHERE id < 100

From what I have seen, SQL Server does a better job of realizing that those two SQL snippets are working against the same recordset. MySQL treats the second query as a completely new query and goes to disk for the data; its query cache only reuses a result when the incoming SQL matches a cached query byte for byte. This may improve some with MySQL's enterprise offering, MaxDB, or in version 5, but from what I have seen in the betas it doesn't seem much better.

But given the ease of full-text searching in MySQL, I will almost always prefer it to SQL Server 2000, at least until Microsoft builds a full-text searching solution directly into the server.

Look at this:

SELECT id, field1, MATCH (field1,field2) AGAINST
('text to be searched') AS score
FROM testDB WHERE MATCH (field1,field2) AGAINST
('text to be searched')

That is simple and easy to understand, and creating the index is just as easy. Creating a full-text index in SQL Server is fairly complicated, primarily because, as I indicated above, you have to deal with an external application. MySQL's built-in full-text rules, such as the 50% rule, under which a word appearing in more than half the rows is treated as noise and matches nothing, can make querying an effort for the user, as you can see in effect by visiting Owens Performance Blog Search; but much of that can be handled by decent business logic.

Ultimately it comes down to performance for the price, and compared that way MySQL wins: it is free. Still, MS SQL Server 2000 is a good value given its strong performance, robustness, and quality of tools, and I will never mind using it on a job. Given my preference, though, I would choose MySQL.


The Windowing Graphical User Interface

If you ask most people about the biggest advance in the history of desktop computing, most would say the windowing graphical user interface. While the GUI is definitely a significant leap beyond the textual interfaces of the Unix tradition, I am somewhat dismayed by the lack of progression in interface technology since.

It seems that the pioneers of these technologies can only think in windows. It would be interesting if Apple and Microsoft, instead of polishing their GUIs, were working in earnest on another method of communicating with computers. Artificial intelligence and excellent speech recognition come to mind. With the incredible number-crunching performance of current desktop computers, there is no reason for the absence of decent speech-to-text software. Dragon is really good, but for controlling the OS, browsing, and writing software it is far from ideal. For example, the computer should be smart enough that, when I am developing in Dreamweaver, I could say “cffile tag” and it would insert the cffile tags with the cursor positioned inside the opening tag, ready for attributes. If the next command I spoke was not an attribute of that tag, the cursor should jump between the tags and begin entering text there. These things should be built into the operating system as APIs. But perhaps I shouldn't be looking to Apple and Microsoft for this. Maybe they are too focused on directly competing with each other to really innovate; maybe the open source community can come up with something to knock them off.
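As a toy sketch of the kind of API I mean, with everything invented for illustration: the OS hands the editor a recognized phrase, and the editor maps phrases to snippet insertions, with a marker standing in for where the cursor should land.

import java.util.Map;

public class VoiceSnippets {
    // Hypothetical phrase-to-snippet table; the pipe character
    // marks where the cursor should land after insertion
    static final Map<String, String> SNIPPETS = Map.of(
        "cffile tag", "<cffile |></cffile>",
        "cfquery tag", "<cfquery |></cfquery>"
    );

    // A recognized phrase either expands to a snippet or is
    // treated as plain dictation and passed through unchanged
    static String expand(String phrase) {
        return SNIPPETS.getOrDefault(phrase, phrase);
    }

    public static void main(String[] args) {
        System.out.println(expand("cffile tag"));   // a snippet
        System.out.println(expand("hello world"));  // plain dictation
    }
}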

I definitely don't see the GUI taking a dive anytime soon, but for devices like cell phones there isn't a much better way to control them than talking to them. I should be able to have my phone read off incoming SMS messages and dictate a response back to it. Intel has some pretty powerful mobile processors debuting, and they could easily handle this sort of rudimentary voice recognition. I should be able to browse the web from my phone and have it read off the contents of the pages it finds while I am driving. It only makes sense.

As far as search goes, this makes URLs and DNS somewhat unnecessary. Obviously you should be able to say “triple-w dot whatever dot com” and have your computer browse there, but who would actually do that? Wouldn't you just go into Google and say “blah dot com,” or more likely just “blah”? This has come up several times in blogs and on discussion boards: more and more, Google is becoming the primary way people find websites. That is good in that accessing information is more straightforward for most people; it is bad in that not every page can appear on Google. Google seems to realize this, which is why the blogsearch site has been launched, to give more people a chance to be found. Without DNS, and without having to know a web address, a combination of voice control and Google's search technology could make files on a computer available through the voice browser, along with any assets on the world wide web. Wouldn't it be great, as you are walking out the door, to say as an afterthought, “Find the best price on this movie, download it, and forward it to my television”? When you get home from work, you could just say to your television, “start a movie.” That is where we need to be going. Centering all computing on multiple desktop machines in a home is a dead end. Devices and home servers are the future, and the graphical user interface is not the interface for that future.


Windows Vista SuperFetch

Most PC users are plagued by chronic slowdowns that worsen as their PC ages. Some of this is due to spyware, but much of it is caused by software loading into the system tray, the area where a program can run without having an interface or appearing in the taskbar. The tray was a great idea when it was introduced with Windows 95, but in an era when many users run more than 1 GB of RAM it seems unnecessary, or at least due for a redesign in Vista.

Microsoft is attempting to remedy the slowness, and to acknowledge the larger amounts of RAM in currently shipping systems, with a technology called SuperFetch. It is essentially a smart caching service designed to notice which software you use most frequently and load it into memory at start-up. This can improve perceived performance for some users, but it would probably end up getting in the way of most power users. In a way it takes users back to the heady days of Windows 3.11, when we could choose how much system memory was dedicated to applications and how much was reserved for hard drive cache; this system would behave much the same way, except that the system, not the user, would adjust the balance automatically.
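Conceptually, this boils down to frequency-based preloading. The Java sketch below is my own rough reading of the policy, not Microsoft's actual algorithm: count launches as they happen, then warm the most-launched programs at boot.

import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class PrefetchPolicy {
    private final Map<String, Integer> launchCounts = new HashMap<>();

    // Called each time the user starts a program
    void recordLaunch(String app) {
        launchCounts.merge(app, 1, Integer::sum);
    }

    // At boot, pick the n most-launched programs to preload
    List<String> preloadList(int n) {
        return launchCounts.entrySet().stream()
            .sorted((a, b) -> b.getValue() - a.getValue())
            .limit(n)
            .map(Map.Entry::getKey)
            .toList();
    }

    public static void main(String[] args) {
        PrefetchPolicy p = new PrefetchPolicy();
        p.recordLaunch("word.exe");
        p.recordLaunch("word.exe");
        p.recordLaunch("excel.exe");
        System.out.println(p.preloadList(1)); // [word.exe]
    }
}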

To understand why the system tray is strange, and probably should be redesigned or removed, it helps to look at how Apple and Microsoft handle GUI-less applications. In Windows, developers can create full applications that run in the background without disrupting the user or appearing on the display; these programs are known as services. Probably the most widely used is the IIS service. IIS, if you don't know it already, is Microsoft's web server product. It is similar to Apache, except that the ability to process scripts is built into the server, whereas Apache needs modules for that functionality. As a result Apache is much more flexible, though that could change if Microsoft truly builds PHP support into IIS. But I digress. Some applications, such as Apache for Windows, run as a service and also include a system tray icon for management. In Mac OS X, applications without a GUI typically either run without any notification to the user or place a small icon in the upper right of the screen; such programs are fairly rare, since most Macintosh applications load themselves into memory on first use and remain there. Since it is rarely necessary to turn a Macintosh off or reboot it, your programs always launch insanely quickly. It would be best for Microsoft to implement a similar system: it would minimize the overhead of keeping frequently used programs in memory, and it would be vastly less complex.

Another potential solution would be to automatically dump programs from the system tray when they are not being actively manipulated by the user or performing some operation on the system. This would do two beneficial things. First, it would free memory, handles, and other resources for software that is actually doing something. Second, it would discourage developers from using the system tray as a place to park their icons. To supplement this model, Microsoft should encourage developers to write services for Windows instead of tray software. The first time an application was closed this way, the system would notify the user and ask whether they wanted to be notified in the future, so the user would remain in control. On systems with less than 512 MB of RAM, this simple scheme could pay back enormous benefits, reducing the idle footprint of the OS from around 400 MB on some systems to a manageable 256 MB. Microsoft is over-engineering this one; they should review the K.I.S.S. school of software development.