Today is a good day to code

The Biggest Trick that Jeff Bezos Ever Pulled

Posted: April 19th, 2012 | Filed under: Amazon, Apple, Companies, Facebook, Google | No Comments »

Boa Constrictor

For reasons unknown, the tech media completely fails to give Jeff Bezos and Amazon the recognition they deserve.  I believe this is due to a deliberate strategy executed by Amazon to quietly grab as much mind and market share as they can.  If they continue on their trajectory, they may become unassailable; in fact, they may be already.

There are blogs and podcasts called things like Apple Insider, This Week In Google, Mac Break Weekly, etc… I have yet to hear about any blogs or podcasts about what Amazon is doing week in and week out, but in many ways it is much more interesting.  Amazon now handles 1% of consumer internet traffic, pushing all of it through its near-ubiquitous compute cloud infrastructure.  They are rapidly and efficiently dismantling existing retail.  Amazon is probably on their way to completely owning web commerce.  Amazon has massive amounts of data on what people have, want, and will want based on what they own and buy.  Through their mobile applications they are gathering pricing signals from competitors, so that they can use their own cluster computing prowess to change pricing on the spot.

What is shocking is that, despite their proficiency, no one discusses how absurdly dominant Amazon has become.  Everyone just treats Amazon running all internet commerce and large swaths of its infrastructure as “the way it is.”  Amazon is more a force of nature at this point than a company.

It isn’t just the tech media that doesn’t give them the credit they deserve; major tech companies don’t either.  Google and Apple seem ready to laugh off the Kindle Fire while Amazon soaks up more signals.  Microsoft doesn’t even try to match them.  Google’s commerce efforts look half-baked compared to what Amazon does, and they show no signs of trying to do better.

With the bitter rivalries we constantly hear about between Apple and Google, Microsoft and Google, Microsoft and Apple, etc…, it seems absurd to think that someone would start a podcast about Amazon.  Fifty years from now technology changes will have toppled Apple, Google, Facebook, and Microsoft, but I’d bet that Amazon will still be around.

Jeff Bezos and his company wield algorithms and data more effectively than anyone else in the industry, despite all the credit we give Google for search.  Their suggestion and comment-filtering algorithms are, bar none, the best around.  Amazon is integrated into the fabric of our lives in a way that no other tech company has managed.

Amazon will keep doing what Amazon does best: being ruthless, being efficient, executing better than anyone else, and staying ahead of the curve.  As long as we keep ignoring them, they are doing their job.  The greatest trick Amazon ever pulled was convincing the world that they didn’t exist.  They have convinced the world that they are just retail.


Google’s Vision of the Future is Correct… But They May Not Be The Ones Who Implement It

Posted: January 8th, 2011 | Filed under: Companies, Google, Lifestyle | No Comments »

On a drive from Colorado to Las Vegas this past week, my daughter and my son were in the back seat of our car using my daughter’s netbook.  She recently turned 7, so I bought her a netbook and am starting to teach her how to code.  My son wanted her to change the video they were watching, and she began to explain to him how the internet works.

She told him that all of her stuff was on the internet (emphasis mine) and that the movie they were watching was the only one on her netbook.  She explained that her computer was barely useful without the internet, that the internet came from the sky, and that her computer needed a clear view of the sky to receive it.  Since we were in the car and the roof was obscuring said view, they couldn’t get the internet, and couldn’t change the movie.

Listening to this conversation gave me a bit of pause as I realized that to my children, the internet is an ethereal cloud that is always around them.  To me it is a mess of wires, switches, and routers with an endpoint that has limited wireless capabilities.  When I thought it through, however, I realized that my kids had never seen a time when someone had to plug in their computer to get to the web.  Plugging in an ethernet cable is as old school as dial-up.

Once that sank in, I understood that the Cr-48, Google’s Chrome OS netbook, is a step in the right direction.  While I am very enthusiastic about several aspects of Google’s vision of a web-based future, and in all fairness others’ visions as well, I do not feel that the current approach will work.

A centralized system where all of a user’s data lives and all communications pass through is not an architecturally sound approach.  As the number of devices each user has goes up, the number, size, and types of connections are going to stress the servers exponentially.

It is already incredibly difficult to keep servers running at internet scale; we need entire redundant data centers to keep even small and simple web-scale endeavors running.  When you take a step back, you realize that a system like Facebook is barely working; it takes constant vigilance and touching to keep it running.  It isn’t like a body, where each additional bit adds structural soundness to the overall system; instead, each additional bit makes the system more unwieldy and pushes it closer to breaking.

Google is another example of a system that is near the breaking point.  They are obviously struggling to keep their physical plant serving their users, and like Facebook they are so clever that they have been able to meet each challenge and keep it running to date.  But looking at the economics of it, the only reason this approach has been endorsed is because of how wildly lucrative mining usage patterns and the data generated by users has been.

I don’t think this will continue to be the case as the web reaches ever larger groups of people. I don’t think any particular centralized infrastructure can scale to every person on the globe, with each individual generating and sharing petabytes of data each year, which is where we are going.

From a security and annoyance perspective, spam, malware, and spyware are going to be an ever-increasing and more dangerous threat.  With so much data centralized in so few companies with such targeted reach, it is pretty easy to send viruses to specific people, or to gain access to specific individuals’ data.  If an advertising company can use a platform to show an ad to you, why can’t a hacker or virus writer?

The other problem, currently affecting Google severely with Facebook next, is content spam.  It is those parking pages that you come across when you mistype something in Google.  Google should have removed these pages ages ago, but their policy allows them to exist.  Look at all of the Stack Overflow clones out there; they add no real value except for serving Google AdSense off of Creative Commons content.  What is annoying is that, because of the ads, they take forever to load.  Using a search engine like Duck Duck Go, things are better, but this is likely only because it is still small.  DDG also says that it will not track its users, which is awesome, but how long will that last?

It is possible for a single altruistic person to algorithmically remove the crap from the web in their search engine, but eventually it seems that everyone bows to commercial pressure and lets it in, in one fashion or another.

Concentrating all of the advertising, content aggregation, and content itself in a couple of places seems nearsighted as well.  The best way to make data robust is to distribute it.  Making Facebook, or Google, or Apple for that matter, the only place where you keep your pictures is probably a bad idea; maybe it makes sense to use all three, but that is a nuisance, and these companies are not likely to ever really cooperate.

It seems to me that something more akin to Diaspora, with a little bit of Google Wave, XMPP, the iTunes App Store, and BitTorrent, is a better approach.  Simply put, content needs to be pushed out to the edges, into small private clouds that are federated.

This destroys most of the value the incumbents have concentrated around advertising, but it creates the opportunity for the free market to bring its forces to bear on the web.  If a particular user has content that is valuable, they can make it available for a fee.  As long as a directory service can be created that allows people to find that content, and the ACLs for that content live on, and are under the control of, the creator, that individual’s creation cannot be stolen.
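To make that a bit more concrete, here is a minimal sketch, in JavaScript (Node.js), of what one of these federated edge nodes might look like.  Everything in it is a placeholder of my own: the token-style ACL, the port, and the content names are assumptions for illustration only.  The point is simply that the content and its ACL live on the creator's own node, and a directory service elsewhere would only need to know where to find it.

```javascript
// A hypothetical federated content node: the creator's machine holds the
// content and the ACL; a directory service elsewhere would only hold the URL.
var http = require("http");
var url = require("url");

// Content and its access control list live with the creator, not a central service.
var library = {
  "vacation-photos": {
    body: "the content itself would go here",
    acl: { "friend-key-123": true, "family-key-456": true } // tokens the creator has granted
  }
};

http.createServer(function (req, res) {
  var parts = url.parse(req.url, true);
  var item = library[parts.pathname.slice(1)];
  var key = parts.query.key;

  if (!item) {
    res.writeHead(404); res.end("No such content on this node.");
  } else if (!item.acl[key]) {
    // The creator's ACL, not a platform's policy, decides who gets the content.
    res.writeHead(403); res.end("This creator has not granted you access.");
  } else {
    res.writeHead(200, { "Content-Type": "text/plain" });
    res.end(item.body);
  }
}).listen(8080); // each node is self-hosted; a directory service would advertise it
```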

Once the web is truly pervasive, this sort of system can be built; it will, however, require new runtimes, new languages, protocols, and operating systems.  This approach is so disruptive that none of the existing large internet companies are likely to pursue it.  I intend to work on it, but I’m so busy that it is difficult.  Fortunately, my current endeavor has aspects that are helping me build skills that will be useful for this later, such as working with the BEAM (Erlang/OTP) VM.

The benefit is to individuals more than it is to companies; it is similar to the concept of a decentralized power grid.  Each node is a self-sufficient generator, and the system is nearly impossible to destroy as long as there is more than one node.


Stephen Wolfram’s Computational Knowledge Engine

Posted: March 9th, 2009 | Filed under: artificial intelligence | 2 Comments »

Ars Technica has no faith.  They are already saying that Wolfram’s knowledge engine will fail, based, I’d imagine, on the complete and utter disaster that Cuil and other would-be Google challengers have been.  Here’s why I think the computational knowledge engine can be a success.

First of all, it’s Stephen Wolfram, who truly shouldn’t be underestimated.  He is also not trying to say that it can cure cancer; really, he isn’t saying what it can do or what its ultimate goal is, except that it is going to answer simple questions.  I don’t understand why this is impossible.  Technology is clearly accelerating at a near-exponential rate.  The improvement in technology and science that happened between 1997 and 2000 was probably matched again by June 2002, and so on.  If that is to be accepted, then you have to believe that at some point soon we should get to an intelligent system that can answer a simple question like “what color is the sky,” not by looking it up in a database, but by actually reasoning out the answer.

I think that Ars isn’t giving these guys enough credit.  I can’t wait to see what they have cooked up.


A Response To: “The CSS Corner: Using Filters In IE8”

Posted: February 23rd, 2009 | Filed under: Companies, CSS, Microsoft | No Comments »

Well, the IE team has posted an excuse for why IE 8 will not handle widely used CSS 3 extensions.  The reason: it’s hard, and it was a stretch goal.  Instead we are left with a slightly more standard implementation of the filter CSS attribute, -ms-filter, as opposed to filter.

Furthermore, the IE team claims that they are doing this so that “web authors do not have to rewrite their stylesheets”. 

OK.  Let’s look at this objectively.  It is indeed hard; building a web browser from scratch is no joke.  I have tried several times, and I am still trying to build a web browser.  I have tried this in C++, Java, even Ruby.  It is always hard.  Most of the difficulty comes from trying to render pages that aren’t formatted properly.  Right or wrong, that is how the web is currently built.  However, I have a radical solution, and I apologize in advance for the shout: *USE WEBKIT*.  Why is this a problem?  It would be easy to use the standard msie7.dll or whatever for pages that need the *broken* button in IE 8, then use a new WebKit-based renderer, mswebkit.dll, for pages that are standards compliant or are not using that strange IE 7 tag.  If Multiple IEs works, this would be completely possible.

Let’s take a quick look at why Microsoft might not want to do this.  Google uses WebKit, and Apple uses WebKit.  As far as the technical difficulty, many lesser organizations have implemented a WebKit-based browser from the WebKit source without hiring a million developers.  I think that an organization like Microsoft should be able to handle building a browser using, or based on, WebKit within a few months.  I wish Microsoft could occasionally be more like Google, throw out the product managers, and just build what the world wants.  I don’t understand why they can’t consider this.

Now, about the sentence so that “web authors do not have to rewrite their stylesheets”: I am a web author, and I will not rewrite my stylesheet.  IE users, I am sorry, you will just have to live with a broken layout.  I do not have the time or the interest to rewrite my cool, cutting-edge web applications to work with 10-year-old technology.  They said this stuff was originally written for IE 4, which came out for the PC in 1997!  Come on, advance!  I will not write anything for IE.  I will make sure it functions and that none of the tasks a user would do in my web applications are blocked, but I am not going to try to give it rounded rects or opacity if IE doesn’t support web standards.  That sentence alone indicates Microsoft’s hubris; note the “have.”  If it were Mozilla, they would say that web authors don’t *want* to rewrite their stylesheets, not that they would ever have that problem.  Microsoft is still pretending that IE is relevant as far as developer mindshare goes.

Microsoft does some amazing things, but as far as the web is concerned, it is pretty much off my radar.  Users, please, please upgrade your browser to Chrome, Firefox, or Safari.


NBC Still Doesn’t Get It

Posted: February 23rd, 2009 | Filed under: Hulu, Media, NBC | No Comments »

I was wondering how long it would be before NBC started to bite the hand that feeds it with Hulu.  I still think that eventually NBC will completely kill it, with help from Comcast.

First of all, I was amazed that Hulu was allowed to exist, and once it did, I started to count the days until it was killed.  Not that I don’t love Hulu, I do.  I think that Hulu, Joost, and other sites that blend big media programs with net content are awesome.  It allows me to watch television again.

For several years, I didn’t care what was on TV; I didn’t really watch TV.  All I did was watch Netflix and YouTube when I wanted to consume video.  I know that I am not the only one who doesn’t have time to sit down and watch my favorite TV shows in primetime.  Not to mention that I don’t even know when most of the shows I like air.  Before Hulu and Joost, I didn’t even care; I just stopped watching.

What I can’t understand is why NBC seems to not understand that no one wants to watch TV at preset times anymore.  Not to mention that if I am going to be advertised to, I don’t want to pay for the “privilege” of watching shows at a time of my own choosing.

As far as Comcast is concerned, I can’t believe that it is a mistake that every time I watch a show on Hulu, now that Comcast has run my local ISP out of business and bought it at bargain-basement prices, all I see are US military and Comcast ads.  Comcast, I am not going to pay more for your cable package, I am not going to pay for your “on demand,” and as soon as there is an alternative that can provide some semblance of decent speed, I am not going to pay for your compromised internet.  They claim that they are prioritizing packets to ensure network integrity, but Hulu is much slower than it used to be, even while speed tests show something like 14 MB burst downloads.  That doesn’t make sense.  I went from 4.5 MB down to 6 with 14 burst, and it is slower?  I have all this speed, but I can’t use it for anything… Fail…

By removing its content from Hulu affiliate sites, NBC is proving that they don’t get that consumers want to consume video in the way they want to consume it.  I am seriously considering just buying this stuff from iTunes and being done with it.  CBS gets it and is doing a good job; the only problem is that they just don’t have the content.

I think they must believe that if people can’t get NBC content anywhere except for the TV, they will just sit in front of the boob tube and watch it, but they are wrong.  People will stop knowing about the shows and will begin to look for alternatives like video games, or short indie programs that are readily available on online-only networks like ON Networks, Revision3, etc…  I already consume way more video podcasts than TV shows anyway; it wouldn’t take much for me to just drop TV entirely.  What would that do to NBC’s ad revenues?  Comcast needs to get a clue and realize that they are a dumb pipe; they need to forget about the coax business and get with the TV-over-IP business.  If they want to compete, how about creating their own quality content to win the ad business instead of crapifying my internet connection and spamming me to try to get me to embrace their dying business model.

NBC will never get it.  They need to just go away.  I like a few of their programs, like Battlestar and Heroes, but I am not sure that it is worth the effort, especially with iTunes and Netflix around.  If they take that away, well, I just don’t know; perhaps I’ll have to write and produce my own sci-fi stories.

NBC (Hulu) Removes content from Boxee


What is this Y!Q stuff?

Filed under: JavaScript, Programming, Uncategorized | No Comments »

You may have noticed all of the Y!Q links everywhere on my site. It is a new beta product from Yahoo! that allows people to perform web searches constrained by selected content from the page they are searching from. The content that goes to Yahoo! is selected by the publisher and targeted to return even more relevant results than would be possible going directly to the search engine.

When a user visits a search engine, the system has no background about the person to constrain their results, which makes it difficult to perform a search. For example, if I knew someone was from Washington State and they typed in the word apple, I could assume they might be looking for apple wholesalers, apple growers, or apple trees. If someone from California searched for the word apple, I might return the company. This is only possible if you know something about the person who is searching, which is why personalized search has been receiving more focus of late.

I prefer the context based approach, because then I don't have to provide any personal information for the search engine to give me what I want. It would know just by the content of the web page that I am searching from.

I'll be honing the ColdFusion parsing scripts to give the best possible content to Yahoo! I'll be removing words that are fewer than four characters in length from the article, to get rid of parts of words and words that carry little meaning, like 'the.' I hope to have the best, most relevant results, because Yahoo! is offering $5,000 in their contest. Of course there had to be some motive for me to use this beta program!
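The parsing itself will be done in ColdFusion, as mentioned above, but purely to illustrate the filtering step, here is a rough sketch of the same idea in JavaScript. The function name and the exact cleanup rules are my own placeholders; the four-character cutoff is the one described in the paragraph above.

```javascript
// Sketch: reduce an article to the words most likely to give Y!Q useful context.
// Drops words shorter than four characters (word fragments, "the", "and", etc.).
function extractContext(articleText) {
  return articleText
    .toLowerCase()
    .replace(/[^a-z\s]/g, " ")    // strip punctuation and digits
    .split(/\s+/)
    .filter(function (word) {
      return word.length >= 4;    // the cutoff described in the post
    })
    .join(" ");
}

// Example: the filtered text is what would accompany the Y!Q link.
var sample = "The quick brown fox jumps over the lazy dog";
console.log(extractContext(sample)); // "quick brown jumps over lazy"
```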

I suppose that in its final iteration, Yahoo! will create some type of advertising revenue-sharing model similar to Google's AdWords. They seem to be hoping that it will generate more clicks because of its usefulness to the user. It is still kind of buggy; for example, in all browsers other than Safari 2.0 a semi-transparent overlay pops up when the Y!Q link is pressed, while on Safari it takes you to Yahoo!'s relevant results page. Hopefully they will fix this soon; I'm pretty sure it has something to do with the changes Apple made to Safari's JavaScript engine. Also, since I am trying to automate this, sometimes a stray character gets into the string and causes the Y!Q to return something invalid. I hope this will help with your searching.


JoostBook – Joost to Facebook Interface Widget

Filed under: java, JavaScript, Programming, Uncategorized | No Comments »

Since I'm in love with Joost, I have been thinking about good applications that I could write for the platform. Before I get into talking about the widget / plugin, let me just say that the experience I have had communicating with the Joost engineers through their joost-dev Google group, as well as their allowing early access to their SDK, has been outstanding. I have rarely come across a more open and generous group. Typically, SDK guardians are very tight-lipped about discussing future features, and are usually quite arrogant about the possibility of a developer finding an undiscovered bug. None of this has been the case with the Joost SDK staff.

If you don't want to read the details about how I built it and you just want to use it, you can get it here: JoostBook: Joost / Facebook Interface. You will need Joost and a Facebook account to get started.

Now, about the widget. Firstly, the installation is a little weird because of the level of control Facebook insists on. In order to use the SDK, you have to authenticate; if an unauthenticated request is made, the response is the Facebook login page. This makes for some unique error-catching conditions.

Secondly, we web developers often take for granted that the DOM will have a listener attached to it and will automatically refresh if anything in it changes. Well, I know that the Joost engineers are working on it, but it doesn't refresh, and therefore, while you can create new XHTML elements and modify existing ones with JavaScript, you are currently best off hardcoding all of your objects up front and just changing their contents. Also, injecting XHTML using innerHTML doesn't really work so well currently either. I'd suspect that much of this is because there is a bridge between the 2D world of XULRunner / Mozilla and the 3D world of the Joost interface. I'm sure there is a lot of complexity between the two.
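For what it's worth, here is a small sketch of that workaround: declare the elements you will need up front in the widget's XHTML and then only swap their contents from script, rather than injecting new markup with innerHTML. The element IDs and the update function are hypothetical; the pattern is the point.

```javascript
// Hardcode the elements up front in the widget's XHTML, e.g.:
//   <div id="status"></div>
//   <div id="nowWatching"></div>
// then only mutate their contents from script.
function updateWidget(statusText, showTitle) {
  var status = document.getElementById("status");
  var nowWatching = document.getElementById("nowWatching");

  // Replace text content instead of injecting markup with innerHTML,
  // which does not render reliably in the current Joost runtime.
  while (status.firstChild) { status.removeChild(status.firstChild); }
  status.appendChild(document.createTextNode(statusText));

  while (nowWatching.firstChild) { nowWatching.removeChild(nowWatching.firstChild); }
  nowWatching.appendChild(document.createTextNode(showTitle));
}

updateWidget("Logged in to Facebook", "Now watching: some channel");
```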

So basically, once you have downloaded Joost and installed the plugin, the first thing the widget has to do is check whether you are logged in; if you aren't, it has to show you the Facebook login page in an iframe so that the XULRunner browser can be cookied. After that, the widget should work as one would expect. You may have to log in a lot, and if you aren't logged in, obviously the application can't update the JoostBook Facebook application.

Writing the Joost plugin was the easy part; getting the Facebook stuff to work was the hard part. Most of that was because the error handling is terrible. Since Facebook doesn't let you see the 500 errors that your server is throwing, and it doesn't log them, you have to find other ways to check whether your server is behaving properly. I spent a lot of time in my logs checking for errors.

The install process is a little weird too. For example, in Firefox 2.0.0.8 on Windows XP, when I clicked on the Joda file linked in the page, it tried to open it as if it were some kind of markup file; obviously the joda looked like garbage, and I had to right-click and save. Perhaps if I had used a joost:// link it would have worked OK, but I think more research is in order. I didn't really try it in IE because most of the readers of this blog use Firefox, but it should work the same way.

Then having to install the application in Facebook can be a little difficult as well. Well, the installation isn't difficult; it's the concept that you have to install two applications that work together that is hard. At least there is no particular order in which you need to install them; worst case, whenever you run the JoostBook plugin in Joost, it'll show you the Facebook login page all the time.

At any rate, it was a fun experience, and I still think the guys at Joost are on to something. I'm slightly less psyched about the Facebook platform, but I'm still excited about it.


New Internet Explorer 7 to Allow More Customization

Filed under: Google, JavaScript, Microsoft, Programming, Uncategorized | No Comments »

I love the ability I have to add more functionality to Firefox. Right now I have the web developer tools installed so that I can check out a page's stylesheets, JavaScript, block-level elements, etc… I have the IP tool installed so that I can see the IP address of the site that I am currently visiting. I have the Gmail notifier and the PageRank tool all incorporated in my browser, most of which modify the status bar at the bottom of the browser and are completely innocuous. Internet Explorer has always supported plug-ins, but they were limited in their ability to change the user's browsing experience, relegating them to toolbars and the like. That is about to change.

Similar to the new Google dashboard, Internet Explorer will allow small web applications to be installed in the browser. It will let a user modify the web pages they are viewing, or create a new download manager using the .NET languages; really, the implications seem to be pretty huge. There is just one problem: security.

One of my biggest fears with a heavily extensible Internet Explorer is that people will be able to use it to compromise the security of the operating system. We have heard time and time again that in Longhorn, ahem, Vista, users will be able to run Internet Explorer 7 in a sandbox of sorts, or a least-privileged user account, preventing would-be hackers from compromising the system. That is great for Vista, but what about on Windows XP Service Pack 2? Don't get me wrong, I think Microsoft has done as much as can be expected of anyone when patching a completely insecure OS, and they did it in record time too. Still, there have been plenty of bulletins regarding more compromises and exploits in Windows XP SP2, some regarding Internet Explorer.

If you give individuals the ability to distribute code that a user can install, it is possible, by definition, to compromise that user's system. I'm sure that Microsoft would be quick to point out that it isn't their fault if someone installed software that allowed hackers to have their way with all their files, but at the same time it is very easy to misrepresent a piece of software to a computer novice who is using Windows. Just look at how far Gator / Claria has gotten sneaking software onto systems. I think that while having the ability to customize one's web browser is cool, Microsoft should consider passing on this potential nightmare.

It is sort of reminiscent of Microsoft's touting of ActiveX and how it was going to obliterate the line between desktop software and internet applications and change the way we all use our computers. Well, it changed the way we all use our computers: we all need anti-virus / spyware / malware filters that sniff out those ActiveX controls and disable them. Most of us, those in the know, turn ActiveX controls off altogether if we have to use Windows.

I think that Microsoft should really not include this feature, and I mean even for toolbars, unless they are reviewed and signed by Microsoft. That is the only way to be sure users aren't getting malware. If a plug-in isn't signed by Microsoft, then the OS should refuse to install it. It should be that simple. Of course it makes developing for IE that much more difficult, but Microsoft could release a developer's version of IE that was open source, so that the plug-in verification could be disabled to allow all plug-ins to be installed. Everyone in the software business knows that features move boxes, but Microsoft should keep their eyes on the prize of security. They really need to get their reputation back, and integrating more sketchy features is not the best way to do this.

IE Extensibility – From the IE blog


Big Iron (Mainframes) and the World of Tomorrow

Filed under: JavaScript, Programming, Uncategorized | No Comments »

There was an article in CNET yesterday espousing the need for developers to pick up mainframe development, and for schools to reinstate their mainframe classes. While I don't think anyone should waste their time learning about a mostly dead technology, it makes sense to learn from the applications developed on mainframes and take the lessons with a grain of salt.

Right now I am working on converting a legacy mainframe application that was implemented in the 1970s into a web application. The real issues stem from the current business process around that mainframe. The database, probably some RDBMS variant, is normalized in such a way that it makes enough sense to keep that structure rather than try to re-invent the wheel. What has been surprising is that it also makes sense to maintain most of the data presentation layer.

The people who use the current system get a ton of data from a very small amount of screen real estate. The mainframe systems were usually text based and limited in the number of characters that could be stored in a field, and therefore displayed. Much of the business process that resulted from these limitations has evolved around using codes and cheat sheets to figure out what the codes mean. This also has the effect of shielding somewhat sensitive information from outsiders and customers. The use of codes as shorthand for more detailed information also means that experienced users can transfer a large amount of knowledge in a very short time, similar to the way we use compression to zip a text file into a much smaller file for decompression later. When a user inputs the code, they are compressing their idea into a few characters that the user on the other end can expand back into its full meaning.

I have been more fortunate than most, because I have access to one of the original architects of the system, and I believe that having an understanding of the business environment and the system architecture is more important than knowing the actual code. Most people looking to hire individuals who understand the mainframe are really looking for people to disassemble their applications and rebuild them as web applications.

I do intend to maintain the look of the existing mainframe screens, but I intend to replace the current cheat sheets with simple JavaScript hover events that display descriptions of what the codes mean. I like this approach of blending the old with the new, since it will create a sustainable bridge between the legacy users and incoming users who may not have had the same experience.
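As a rough sketch of what those hover events might look like, something like the following could replace the paper cheat sheets. The code table, class name, and markup here are made up for illustration; the real descriptions would come from the legacy documentation.

```javascript
// Hypothetical lookup table distilled from the old cheat sheets.
var codeDescriptions = {
  "A1": "Account active, in good standing",
  "H7": "Shipment held at customs",
  "X9": "Record archived, read-only"
};

// Attach hover handlers to every element marked as a legacy code,
// e.g. <span class="code">A1</span> in the rendered screen.
function attachCodeHints(doc) {
  var spans = doc.getElementsByTagName("span");
  for (var i = 0; i < spans.length; i++) {
    if (spans[i].className !== "code") { continue; }
    spans[i].onmouseover = function () {
      // The browser shows the title attribute as a tooltip on hover.
      this.title = codeDescriptions[this.innerHTML] || "Unknown code";
    };
  }
}

attachCodeHints(document);
```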

The article in CNET further implies that mainframes still sport some advantages over server-based applications. That may be true to a degree for deployed desktop applications, but mainframes have no advantage when it comes to web applications. Still, people who know COBOL, FORTRAN, and other legacy languages can command a premium for their technical knowledge in the few shops that feel that maintaining these mainframe applications and hardware is better for some reason than replacing them, but it is only a matter of time until these shops agree that paying an ever-increasing amount for maintenance and upgrades is more expensive than bringing someone on board to convert the application to the web. Therefore I see no future in the mainframe; however, some great applications were developed for them, and the applications that are still running on them are probably more robust than average.

Much of the methodology I tend to follow when constructing a database or organizing code was implemented for the first time on big iron, so I actually feel privileged to be able to work with it. It's almost like looking into a time machine where you can see and feel the environment of the past, which, even though it may seem the same, is vastly different from the business climate today.

Learn COBOL today!

What is a mainframe anyway?


Internet Explorer 6 Hangs with Multiple Connections

Filed under: ColdFusion, JavaScript, Programming, Uncategorized | 6 Comments »

At work we are using the Demis map server, which by itself is an incredible application. We had built a Flash-based client as our application to allow people to see images overlaid on top of the vector data digested by the map server. One of the issues we had observed with the application was that it tended to hang, or stop responding, when a user asked for many images to be shown on top of the vector map and then navigated away from the current screen. Now, since I had seen the code, and it was a mess, with JavaScript setting cookies that ColdFusion was supposed to read and pass to Flash, and images used for checkboxes, I automatically suspected the code. However, the problem was deeper than that.

The code needs to be rewritten, no doubt; there are many more efficiencies to be had, but that didn’t explain the hang. I combed over the server, watching its response while a user was using the application. The map server stresses the machine, because it needs a ton of I/O, and it would spike the CPU frequently, but no process went to 99% CPU utilization, and the server seemed to respond to other clients even when one of them was hung up. It was pretty clear then that the problem wasn’t with the server. To take this logic a little further, we built a load test using wget, saving the results to files. We looped over the calls as fast as we could, and we never caused the map server to hang. It performed as expected.

The next logical step was to look at the possibility of corrupt files. We did notice that we could get the map server to crash when we fed it corrupt files, but we found no evidence that the files we were using in production were corrupt in any way. At this point we were plenty dejected, because we had spent something like 35 hours over a couple of days working on this problem and we had nothing. We performed a new ColdFusion install on a different server, we built a server with better hardware, we reinstalled the map server application multiple times; nothing seemed to affect it. We even improved the network bandwidth available to the client, and still nothing. At that point I was down to either the code or the client.

To test this theory I commented out all of the Flash calls on every page and went through the application trying to cause the system to hang. I couldn’t do it, so I had effectively narrowed the possible cause down to the Flash movie. I started to go through what the Flash movie was doing and what could cause it to fail. The Demis people told us that they had seen hangs when the map server wasn’t responding and the Flash player was parsing XML. This led me to try the application in Firefox, and lo and behold, it never hung up. It worked like a charm. The only problem was that our client was set on Microsoft Internet Explorer.

I started on the arduous task of removing all XML parsing from the Flash code; then I tried it, and it still hung. I was truly disappointed, but I rethought what was happening with the XML. It was making server calls, and I realized that I could have up to 8 concurrent connections going on. At the time I thought it was nothing, but then I started trying to find out what was different between Internet Explorer and Firefox. I happened upon an article on MSDN about a known bug where Internet Explorer will hang for 5 minutes when there are 2 persistent connections to a server and rich content is downloaded. I had found my culprit. It turns out that I had to add 2 keys to the registry: MaxConnectionsPerServer and MaxConnectionsPer1_0Server. I set the latter to 8 and the former to 24, hexadecimal. The keys need to be DWORD values.

That would allow 8 connections for HTTP 1.0 and 32 or so connections for HTTP 1.1. The HTTP 1.1 guidelines recommend that only 2 connections be allowed, but if Firefox wasn’t adhering to that, why should I? I added the keys under HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Internet Settings and it worked like a charm. Everything was perfect. Talk about looking for a needle in a haystack. I’m still amazed that I found it.
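For anyone who would rather script the change than edit the registry by hand, here is a sketch using Windows Script Host (JScript) with the values described above; as always, edit the registry at your own risk.

```javascript
// setIeConnectionLimits.js: run on Windows with "cscript setIeConnectionLimits.js"
// Raises Internet Explorer's per-server connection limits as described above.
var shell = new ActiveXObject("WScript.Shell");
var base = "HKCU\\Software\\Microsoft\\Windows\\CurrentVersion\\Internet Settings\\";

// MaxConnectionsPerServer governs HTTP 1.1; MaxConnectionsPer1_0Server governs HTTP 1.0.
shell.RegWrite(base + "MaxConnectionsPerServer", 0x24, "REG_DWORD");   // 0x24 hex, as in the post
shell.RegWrite(base + "MaxConnectionsPer1_0Server", 8, "REG_DWORD");   // 8 connections for HTTP 1.0

WScript.Echo("Connection limits updated. Restart Internet Explorer to pick them up.");
```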

The purpose of this entry is so that no one else has to go through the week that I just went through. Generally no software should be in front of the client before it is ready, but in this case we already had a client. Hopefully this will help anyone out there who is experiencing hangs in Internet Explorer. Darn Microsoft for not fixing bugs for almost 3 years!

*EDIT Make that 8 years, since IE 8 appears to still suffer from the same problem!*

Here are some helpful links that might be better at explaining than I am…

Wininet Connection Issue

IE Hang Issue