Today is a good day to code

Adding Machine Learning to Nc3 Bb4 Chess

Posted: September 29th, 2011 | Author: | Filed under: artificial intelligence, chess, JavaScript, nc3bb4, Programming | Tags: , , , , | No Comments »

While the garbochess engine is plenty strong as used in the Nc3 Bb4 Chromebook chess game, I thought it would be interesting to look at adjusting its weighting mechanism based on successful and unsuccessful outcomes.

The first thing I had to look at was how garbochess weights potential moves.  This took me into the super interesting world of bitboards.  A quick aside: I have been working on mapreduce for the past few weeks, so looking at early methods of dealing with big data was interesting.  Chess has an estimated ~10^120 possible move sequences, and evaluating all of the possible moves for a given position, plus all of the possible counters, weighting them, and choosing the best possible move given the criteria certainly qualifies as big data.

Interestingly, the approach wasn’t the Hadoop approach; the hardware for such brute-force methods wasn’t available.  Instead, early chess programmers tried to filter out undesirable moves: obvious bad moves, moves with no clear advantage, and so on.  What they ended up with was a pretty manageable set of moves for a circa-2011 computer.

When garbochess considers moves, it looks at mobility for a given piece, control of the center, whether a capture is possible, what the point differential for a trade would be, and so on, and assigns a score to each possible legal move.  It then runs through the set repeatedly, re-scoring each move relative to the other available moves and removing the lowest-scored ones, eventually arriving at the best possible move.  What I wanted, given that scoring and its specific weights (mobility vs. actual point value for a given piece), was to use a Markov chain for reinforcement learning: describe the entire process of a game, then rate each move, with endgame moves weighted as more important.  Every time the machine takes an action that leads to a success, the bias on the scoring for that action gets heavier.  Failure doesn’t automatically nullify the learning, but it definitely has an effect.
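As a rough sketch of the idea in JavaScript (not garbochess’s actual scoring code; the learning rate, endgame bonus, and function names are assumptions for illustration):

// Hypothetical sketch: bias an engine's move scores using the outcomes of finished games.
// "baseScore" stands in for whatever the engine already computed for a candidate move;
// "history" maps a position/move key to reward accumulated across games.
var LEARNING_RATE = 0.1;   // assumed constant, would need tuning
var ENDGAME_BONUS = 2.0;   // weight late-game moves more heavily

// Called once per finished game, walking its move keys from first to last.
function reinforce(history, moveKeys, won) {
  var reward = won ? 1 : -0.5;   // failure reduces, but does not erase, the learning
  var span = Math.max(moveKeys.length - 1, 1);
  for (var i = 0; i < moveKeys.length; i++) {
    var endgameWeight = 1 + ENDGAME_BONUS * (i / span);   // later moves count for more
    var prior = history[moveKeys[i]] || 0;
    history[moveKeys[i]] = prior + LEARNING_RATE * reward * endgameWeight;
  }
}

// Applied when the engine scores candidate moves for the current position.
function adjustedScore(history, moveKey, baseScore) {
  return baseScore + (history[moveKey] || 0);
}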

Where I got to was a rudimentary implementation of this, because a bunch of housekeeping chores popped up.  For example, since this is JavaScript and all I really have is HTML5 storage, how do I store all of the moves while keeping the system responsive?  No O(n^n) or O(n^2) lookups; I wanted to keep it O(n).  Obviously that called for a HashMap of a sort, but the serialization / deserialization plus the key scheme were a challenge.  I didn’t want it to add too much overhead to the map / scoring system, as the bit twiddling is already pretty efficient, so I did the best that I could using the FEN plus the PGN.  The FEN is the state for the Markov chain, since a given PGN move could occur in many situations, and without the position the weighting system could never be applied against the gravity of the situation.
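As a minimal sketch of what I mean, assuming HTML5 localStorage (the key scheme and helper names are illustrative, not the exact Nc3 Bb4 code):

// Store learned weights keyed by FEN (the position, i.e. the Markov state),
// so a lookup is a single key access instead of a scan over every recorded game.
function saveWeight(fen, move, weight) {
  var key = 'w:' + fen;
  var entry = JSON.parse(localStorage.getItem(key) || '{}');
  entry[move] = weight;                          // move in PGN/SAN form
  localStorage.setItem(key, JSON.stringify(entry));
}

function loadWeight(fen, move) {
  var entry = JSON.parse(localStorage.getItem('w:' + fen) || '{}');
  return entry[move] || 0;
}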

I need to do more work on weighting changes based on how much trouble the machine is in, whether it has an advantage or not, and so on.  But as a start with machine learning in chess, I think it works.


Why My Faith in HTML5 Has Been Reinstated ( Or How I Learned *again* to Love JavaScript )

Posted: July 13th, 2010 | Author: | Filed under: CSS, JavaScript, Programming | Tags: , , | 1 Comment »

Over the long weekend, I was lamenting how many times I have had to write the same routines in different languages: Objective-C, Java, PHP, and so on.  I realized that I have wasted, and would continue to waste, tons of time writing native code, when really most of the functionality of the application can be handled with various features of HTML5.  Originally I had been against this, but now that the iPhone has finally caught up and has a reasonable processor, I think the HTML5 experience can be nearly as good as native.

The funny thing is that much of what is driving my decision is the desire to have my applications’ data and interfaces available everywhere: mobile, web, desktop, and so on.  In other words, the original promise of the web.  Using local caching, the JavaScript key/value store, and the database will let me provide a compelling disconnected-use experience, and the code that I write will be useful across all of the platforms that I use.  The one caveat is that I need to focus on one browser, or approach, and for that WebKit-based browsers seem to be the logical choice.

Now, I realize that not all of my application concepts will be possible with HTML5 and JavaScript; however, through this recent thought experiment I realized that most of the features I would normally have insisted be done natively can be done with HTML5.  The biggest issue I have run across is the 5 MB limit on database sizes in Mobile Safari.  I know it was there in iOS 3.x, and I don’t think it has been lifted in the current OS.  The other issue is the forced UTF-16 encoding of characters; while I understand it technically, it makes it difficult to store more than about 2.5 MB of data on a device in the SQLite storage available to JavaScript.  The approach taken in desktop Safari, where you can ask the user to increase the available size if your database creation fails, is a much better one.
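For reference, a minimal sketch of the Web SQL pattern I am talking about (the database name and schema are illustrative; on Mobile Safari there is no path to ask the user for more space when the quota is exceeded):

// Request a 5 MB Web SQL database; wrap the call rather than assuming success,
// since exceeding the quota simply fails instead of prompting the user.
var db = null;
try {
  db = openDatabase('offline-data', '1.0', 'Offline data', 5 * 1024 * 1024);
} catch (e) {
  // fall back to localStorage, or degrade to online-only behavior
}
if (db) {
  db.transaction(function (tx) {
    tx.executeSql('CREATE TABLE IF NOT EXISTS docs (id TEXT PRIMARY KEY, body TEXT)');
  });
}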

Another interesting pattern I see emerging is using HTML5 as the UI tier and putting the business logic and control structure behind an HTTP server that exposes additional native functionality to the HTML5 app.  The benefit is that your local server implementation can match the remote server implementation, so your client APIs stay consistent.  This seems to me to be the best architecture for minimizing the work involved in porting solutions across platforms.  I absolutely love Cocoa and Objective-C, I enjoy the concepts behind the Android APIs (while despising Java’s syntax), and I think that .NET is pretty cool as well; however, when it comes to getting applications deployed to the maximum number of users in the leanest manner possible, I think it makes sense to lean heavily on the web up front, and then backfill the native implementations as necessary.
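A sketch of what I mean by keeping the client API consistent; the base-URL switch is the whole trick, and the flag and endpoint names here are assumptions, not real APIs:

// The same XHR wrapper talks to either the embedded local server or the remote one;
// only the base URL changes, so the HTML5 UI tier never knows the difference.
var BASE_URL = window.runningLocally ? 'http://127.0.0.1:8080' : 'https://api.example.com';

function getJSON(path, callback) {
  var xhr = new XMLHttpRequest();
  xhr.open('GET', BASE_URL + path, true);
  xhr.onreadystatechange = function () {
    if (xhr.readyState === 4 && xhr.status === 200) {
      callback(JSON.parse(xhr.responseText));
    }
  };
  xhr.send(null);
}

getJSON('/notes', function (notes) { /* hand the data to the UI tier */ });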


The Future of The Internet May Not be HTML5

Posted: May 7th, 2010 | Author: | Filed under: CSS, JavaScript, Programming | Tags: , , , | 1 Comment »

A few days ago, Joe Hewitt went on a Twitter tirade about how web development has been stifled by the glacial pace of innovation at the W3C, which drew a lot of reactions across the web, even some from Google.  Joe Hewitt also quit developing for the iPhone because of the App Store’s policies.  I have been thinking about this for a while, and it is what makes me want to write a web browser.

I agree with Joe Hewitt about Cocoa: the framework is awesome, and for someone who has been fighting browsers for control of the UI for years, it is like a breath of fresh air.  The same is true of any native rendering solution, and it is part of what makes it hard for me to go back to developing web applications.  I like JavaScript, but HTML and CSS not so much.  Talking it over with friends, and thinking about it further, I think that HTML5 may be exactly the wrong direction to take the web in.  Before you click away, this is not some “Apple should allow Flash” argument.  I am thinking about this on a deeper level: about the types of applications we are making on the web today, and some of the issues we are bumping our heads against.

The web has been good at delivering applications with zero install, as well as presenting formatted documents.  The latter is what HTML was designed for; XHTML was created because HTML was overstepping its bounds.  Even though native developers tend to look down on web developers, I believe developing a web application is one of the single most difficult challenges in modern development.  There are multiple languages to master, each with a different metaphor, syntax, and implementation, and there is overlap between them.  There are latency issues, networking issues, and a lack of resources on the client; these are incredibly difficult things to deal with even in native implementations, and the web exacerbates them.

At the heart of the problem is that with native frameworks and systems, you expect everything to be different as you move across platforms.  Business stakeholders appreciate that moving from a Windows app to a Mac app will be hard, and they staff up and provide the appropriate resources to do it.  In truth, coding a cross-browser application is no easier; the issue with web development is that it appears to be easier.  HTML looks like a ubiquitous rendering language that works exactly the same across the board, but it doesn’t, and it likely never will.  JavaScript appears to be the same language across browsers, but nothing could be further from the truth: it performs and behaves differently in each.  CSS seems like it would be identical since all browsers comprehend the same syntax, but the same style can appear vastly different across browsers, and in some not appear at all.

That is the current situation, and we are pressing further into the problem instead of dealing with it.  My opinion is that HTML should be present in browsers only as a legacy rendering system.  What I would like to see is a raw vector-based rendering engine, similar to the canvas tag, as the browser view.  It would give me, as the remote agent, the size of the window and the capabilities of the browser, such as audio, OpenGL, DirectX, and so on.  It would also tell me the origin of the screen (UL, UR, LL, LR) so that I can send the correct rendering directives, and the list of languages that are supported or enabled by the user: binary, JavaScript, Ruby, Python, and so on.  The browser would then progressively download the code and execute it as it receives complete instructions.  The binary code could be JITted using LLVM plus an appropriate front end, and cached; the memory addresses could be sandboxed and virtualized, and the user could set heap sizes for the amount of memory applications were allowed to use.  Google’s Native Client is a step in this direction, but it doesn’t go far enough.  Scripting-language code could be executed pretty much as it is.  This would let the frameworks, rather than the browsers, control everything about the experience.  Innovation could happen overnight, and browsers would be responsible more for enforcing security policies than for rendering.
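To make the idea concrete, here is a purely hypothetical sketch of the kind of capability descriptor such a browser might hand the remote agent before any code is downloaded; none of these field names exist in any real browser today:

// Hypothetical capability handshake the browser would send to the server.
var capabilities = {
  viewport:    { width: 1280, height: 800, origin: 'UL' },   // origin: UL / UR / LL / LR
  graphics:    ['canvas2d', 'opengl'],
  audio:       true,
  languages:   ['javascript', 'binary-llvm'],                // what the client can execute
  heapLimitMB: 256                                           // user-configured memory ceiling
};
// The server then streams back only code the client can run, and the client
// executes each chunk as soon as a complete instruction arrives.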

The benefits of this approach: full-on applications could be developed as one stack and would appear on the client exactly as the developer wished.  The performance would be insane, and the execution environment would be simplified enough that we could develop an adequate sandbox.  Authentication would be up to the developer and would be native, so many of the security issues would go away.  One could still easily structure an application as code plus data rendered as pages.

What are the issues with this approach?  No one has managed to build an adequate sandbox, though as we have seen, JavaScript plus HTML isn’t really a great sandbox either; there are tons of exploits out there.  There is the delay in downloading the initial code, although progressive execution should mitigate that.  The biggest issue is the lack of crawlability, and this is where XHTML 2.0 comes in.  If your content is available as resources through a service, your content could be crawled instead of your app.  That would dramatically improve the value of search engines.  Given an appropriate resource id, the engine could point the user to a URI that renders the content inside your application instead of raw, or the crawler could be an application that shows your data in a slightly different format.

I think that native JITted code plus data, mixed together, is likely our future, especially watching how the App Store is taking off with consumers.  Even they seem to be choosing native applications delivered over the internet over web apps.  The App Store is a primitive version of what I am describing, but the benefits to the end user and the user experience are clear.


Mides 1.7 New Features and Changes

Posted: January 25th, 2010 | Author: | Filed under: Apple, iPhone, JavaScript, mides, PHP, Programming | Tags: , , , , , | 2 Comments »

I wrote Mides originally to help me write web applications when I am on the go.  A huge part of web application development is JavaScript, and the iPhone / iPod is an awesome device for heavy client-side JavaScript apps.  So in Mides 1.7 I added JSLint to make debugging JavaScript easier.

The main problem with the developer setting in Mobile Safari is that it is inaccessible to other applications.  And since one of the main purposes of Mides is to enable development with no internet connection, or an unreliable one, Mides’ internal HTTP server cannot run and serve content to the Mobile Safari application.  That server is the entire reason I built HTTP serving into Mides: so that JavaScript XHRs would work correctly for testing.

What I have done to help with JavaScript debugging is to modify Douglas Crockford’s JSLint library slightly to make it work on the iPhone.  It catches outright errors, and it also offers many excellent tips for writing safe and readable JavaScript.  You can see the errors and optimizations by tapping on the burst-and-exclamation-point icon when it appears over your JavaScript or HTML.  This feature is optional and can be disabled in the iPhone settings.
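Under the hood the check amounts to something like this sketch of how Crockford’s JSLINT function is typically invoked; the options shown are illustrative, not the exact set Mides uses:

// Run JSLint over the current buffer and collect anything it flags.
var ok = JSLINT(sourceText, { browser: true, devel: true });
if (!ok) {
  JSLINT.errors.forEach(function (err) {
    if (err) {   // JSLint pushes a null entry when it gives up early
      report('line ' + err.line + ': ' + err.reason);   // report() stands in for Mides' own UI
    }
  });
}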

Another issue I wanted to address with a new feature is that I always forget the arguments, or the exact PHP method name, that I want to use, especially around MySQL.  I already had the documentation in there, but since it is a full-text search, it tends to take a while.  So I added a feature that lets you look up just the method signature, that is, the method name and its arguments.  I didn’t want to put a button in for this; it just didn’t seem right.  I tried for a while to come up with something usable, and I think I have figured out something that works: you just twist the phone to the right (or left) to trigger code completion on the method.  If the text before the cursor matches exactly one PHP method signature, it will insert that signature in context, inline into your code, with the argument types.  If it matches more than one, it will display a modal dialog that lets you choose from the top 5 PHP methods matching what you have typed.
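The lookup itself is conceptually simple; something along these lines (a sketch, not Mides’ actual code, and the signature table is illustrative):

// Match the identifier before the cursor against known PHP signatures
// and return up to five candidates for the picker.
var signatures = {
  'mysql_connect': 'mysql_connect ( string $server, string $username, string $password )',
  'mysql_query':   'mysql_query ( string $query, resource $link_identifier )'
  // ...table generated from the bundled PHP documentation
};

function lookupSignatures(prefix) {
  var matches = [];
  for (var name in signatures) {
    if (name.indexOf(prefix) === 0) { matches.push(signatures[name]); }
  }
  return matches.slice(0, 5);
}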

One fix that a customer asked for on getsatisfaction.com/mides was that tabs parse properly.  I added that in Mides 1.7 as well, so your tabs will now be displayed properly.  To create a tab, just space 5 characters into the document.

I am adding features both at the request of customers in the burgeoning community on getsatisfaction and through my own usage of the product.  I probably won’t implement all of the suggestions, but please keep them coming; they help tremendously.  Some of them are really tough to implement, but if they make Mides more usable I’m all for it.

One of the main issues with Mides is moving files onto and off of the phone.  Apple hasn’t made it easy, and FTP is not the best solution: it is a nightmare to support and difficult for users to set up.  I thought about a small application you could install on your Mac or PC to make transferring files much easier, but that didn’t seem like the best solution either.  I am actively thinking through better ways, but nothing so far has really stuck.

At any rate, I am constantly trying to make Mides more useful.  I know it has been rough, but I’m glad to see that some of you are starting to get real use out of it.  I hope to keep making it better, and eventually to rival, and in some ways improve upon, the desktop coding experience.


CSS 3 Keyframe Animations: A Step Too Far

Posted: April 16th, 2009 | Author: | Filed under: CSS, JavaScript, Programming | Tags: , , , , | No Comments »

I understand the desire to push the development of web-based applications to the next level, and allow them to truly compete with their desktop counterparts.  I like the functionality of the new perspective properties, keyframe animations, transformations and the like.  For years we enjoyed these features as part of Flash, which could never really figure out what it wanted to be.  Was it a programming environment or some sort of design environment?  We continually had poor tools that we made the best of to do some incredible things, but I think it’s time to take a step back and look at what we are actually doing, and where web application development should be.

For example, have a look at this syntax for defining a keyframe animation:

@-webkit-keyframes 'wobble' {
  0% {
    left: 100px;
  }
  40% {
    left: 150px;
  }
  60% {
    left: 75px;
  }
  100% {
    left: 100px;
  }
}

This is strange in that we are now defining behavior in CSS.  A keyframe rule is ultimately a named structure that defines what amounts to a function, yet it is not visible to, and not accessible from, JavaScript, the supposed language for logic and control.  The problem is that we now have variables and objects outside the scope of the primary programming language: a second, overlapping language whose objects control and define data for UI behavior.  So far, the only functional hook in CSS is via pseudo-classes such as :hover, which can trigger the animation or transition outside of JavaScript, but it wouldn’t stretch the imagination to see a future where CSS is doing more.
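The only way script can participate today is indirectly, by flipping the properties that reference the keyframe block, roughly like this (a sketch; the element id and duration are illustrative):

// JavaScript cannot read or modify the 'wobble' keyframes themselves;
// it can only point an element at them and listen for the end event.
var el = document.getElementById('box');
el.style.webkitAnimationName = 'wobble';
el.style.webkitAnimationDuration = '2s';
el.addEventListener('webkitAnimationEnd', function () {
  el.style.webkitAnimationName = '';   // reset so the animation can be retriggered later
}, false);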

With HTML5 we are packing more functional logic into the “structured data” tier of our web applications, which undermines MVC and creates problems for web application developers that traditional desktop developers do not face.  The complexity of making developers use three different languages for the UI, each with its own data objects, variables, and executable or functional code, should not be underestimated, not to mention the complexity of the language they use on their application server and its interaction with the JavaScript / CSS / HTML languages.

As more desktop applications move to the “cloud”, the engineering effort required to build applications approaching the complexity of, say, Eclipse becomes daunting, and could perhaps be more expensive than developing natively for each platform.  That is because platforms are becoming less diverse as time goes on, through things like Mono (for C#), Ruby and Java on Linux, and the same runtimes with native UI bridges on Windows and Mac OS X.

I don’t really want to see that happen, but for the web to mature as an application development platform, I think these languages need to be consolidated.  In a normal windowing environment, such as Windows, Linux, or Mac OS X, there are libraries, typically written in C / C++, that drive the UI and access to the hardware.  Previously, a developer would write code against that specific library on a specific platform for performance reasons, and be required to maintain multiple sets of controller / model code.  The performance reasons for doing this are fading, so abstractions like the Cocoa-Ruby bridge are becoming more prevalent.  This lets a developer implement most of their business logic once, in the language of their choice, with a minimum of platform-specific code, and most of that code is usually portable to other platforms, including the web.

The increasing complexity of web development, and the decreasing complexity and growing homogeneity of desktop platforms and hardware, is setting up what I believe will be a renaissance of desktop-style development, or mono-language application development.  I think, however, that the new class of desktop / mobile native applications is different from traditional applications in that the model may live in the cloud with just a data cache on the actual client.

I believe that this approach, a cloud model with a less complex native client rendering content, represents the best of both worlds.  A robust, service-based model that makes tables of data accessible to the client, whatever its configuration, is optimal for keeping your options open on the UI implementation.

You get the power of the local CPU, a reduction in complexity by (possibly) using the same language on the back end as the front end, and the flexibility of having the data model in the cloud, by writing native clients in Ruby | Java | C++ | Cocoa.  There is a side benefit of closer compliance with the MVC pattern, which should help reduce complexity and promote code reuse and readability.

Most web developers are doing this today, but they do it in one of two ways.  Either they sidestep HTML / CSS and focus all of their effort on JavaScript, treating HTML as the basic building blocks of a widget approach, building up the “windowing system” themselves, and then using another chunk of JavaScript as the “controller”; or they pre-generate the HTML / CSS / JavaScript on the server and abstract it away into their native language (GWT, for example).  Both approaches are fine, and they work, but they do not address the increasing complexity of the resulting applications.

I think that the most effective way to solve the problem is, as always, to meet in the middle.  XUL was an early attempt at this, but it still required developers to use multiple languages.  What I am hoping to find time to work on is a browser that does all of the existing standards-compliant stuff, but in addition gives developers a way to use the native windowing system through a scripting language: something like using JavaScript for desktop application logic, downloaded from the server and cached on the client via the HTML5 manifest mechanism.  The resulting application would use only that language for all UI and logic.  The language would need to bridge to the 3D engine (OpenGL), to a widget interface that looks like native windows (it could still be web-based behind the scenes), and to the sound APIs.  More importantly, it would have to handle binary data as native objects and streams.
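Purely as a thought experiment, application code in such a browser might read something like this; every API name below is invented to illustrate the shape of the thing, and none of it exists today:

// Hypothetical: one scripting language drives native-looking widgets directly,
// with the code itself cached on the client via the HTML5 manifest mechanism.
var win = ui.createWindow({ title: 'Mail', width: 800, height: 600 });
var list = win.add(ui.listView({ columns: ['From', 'Subject'] }));

net.getJSON('/inbox', function (messages) {   // the model stays in the cloud
  list.setRows(messages);
});

win.show();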

I think the languages that are currently good candidates are Ruby, Python, and JavaScript.  Ultimately it would be best if the web application had access to the camera and other hardware with the user’s permission, but I don’t know if that is really required or desired.  Flash currently does a good job of managing permissions around this sort of thing, but I think more sandboxing is needed for something of the scale I am thinking of.  Microsoft tried and failed with ActiveX, which was a good idea but gave too much access to the local machine.  I’m working on it, but I’m just one guy with a day job, so it takes me a long time.  Eventually I’d like to make it an open source project, but I’m a long way off from that.

To sum it up, I think that letting the web languages keep going the way they are is courting disaster; making CSS beefier and more complicated isn’t a good solution.  Already JavaScript, through the canvas tag, is impinging upon CSS’s supposed territory, but most of this display stuff should have been there exclusively from the beginning.

There is too much overlap between CSS, JavaScript, and HTML.  We will be stuck with fail whales everywhere because there are too many points of failure, and too many areas to specialize in.  Even if IE fully embraces web standards, the general development pattern will be too complicated to support the pace of development, and the complex feature sets available in desktop applications.  If you look at the cost of hiring a CSS specialist, a JavaScript specialist, and a server-side specialist, web startups will be more expensive than desktop or mobile native-app startups.  Hiring a “mono client” specialist who can do all of the things that previously required three people will reduce the cost of application development the way AJAX did originally, and keep the more ambitious web startups viable.  Today, for example, I saw a startup offering to do video editing through the browser, wow!  The future is super bright for web development, but I think some refactoring is necessary to fuel the next surge of productivity.


Which JavaScript Framework is the Fastest

Posted: April 8th, 2009 | Author: | Filed under: JavaScript, Programming | Tags: , , , , , | No Comments »

I have been wondering for a while which JavaScript framework is the fastest.  I was thinking about writing some sort of test to determine which one had the best speed, but I have found one that seems to work: the MooTools SlickSpeed test is a good start.

It focuses on DOM manipulation / access / iteration speed rather than testing the functionality built into the frameworks; I suppose a functionality comparison would be tough since each framework offers different things.  When I ran the test in the Safari 4 beta, Prototype 1.6.0.2 was the slowest, YUI 2.5.2 was the next slowest, MooTools 1.2 was next up from the bottom, JQuery 1.2.6 was the second fastest, and Dojo 1.1.1 was the fastest by a wide margin, albeit with some errors.

In the Google Chrome 2.n beta, the results were as follows:

  1. JQuery 1.2.6
  2. MooTools 1.2
  3. Dojo 1.1.1
  4. YUI 2.5.2
  5. Prototype 1.6.0.2

In Firefox 3.0.6

  1. MooTools 1.2
  2. JQuery 1.2.6
  3. Prototype 1.6.0.2
  4. Dojo 1.1.1
  5. YUI 2.5.2

In IE 8 ( Wow IE 8 is slow )

  1. Dojo 1.1.1 ( many errors disqualified )
  2. JQuery 1.2.6
  3. YUI 2.5.2 ( a few errors )
  4. MooTools 1.2
  5. Prototype 1.6.0.2

iPhone Safari ( DNF / Could not run / Simulator)

  1. JQuery 1.2.6
  2. MooTools 1.2
  3. Dojo 1.1.1
  4. Prototype 1.6.0.2
  5. YUI 2.5.2

Android Browser

  1. JQuery 1.2.6
  2. MooTools 1.2
  3. Dojo 1.1.1
  4. Prototype 1.6.0.2
  5. YUI 2.5.2 ( Big Surprise )

What is interesting about these tests is that, in general, you should use JQuery if your development pattern involves heavy selector use.  I still prefer Prototype because of the programming features I get with it, even if the selector part is slow.  IE 8 breaks a lot of the frameworks; Prototype and JQuery seem to hold up the best.  I haven’t really looked at MooTools, however.

On mobile devices, you should think long and hard about using any framework that adds overhead, since the devices are really slow.  It seems that Dojo uses the built-in Safari functions for DOM navigation or something similar; it was wicked fast in Safari 4, but had a few errors.  Overall JQuery is probably the best, so I guess I’ll have to take a look at it, though reluctantly.  I still need to write a test to check iterator performance, though.
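The iterator test I have in mind would only be a few lines; something like this sketch, which times a selector-plus-iteration pass (the jQuery form is shown, and the iteration count is arbitrary):

// Crude timing harness: run the same selector and iteration many times and average.
function timeSelector(label, fn, iterations) {
  var start = new Date().getTime();
  for (var i = 0; i < iterations; i++) { fn(); }
  var elapsed = new Date().getTime() - start;
  console.log(label + ': ' + (elapsed / iterations) + ' ms per pass');
}

timeSelector('jQuery 1.2.6', function () {
  jQuery('div p').each(function () { /* touch each node */ });
}, 100);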


Safari 4 – Worker Threads… JavaScript Domination

Posted: February 24th, 2009 | Author: | Filed under: Apple, Companies, Google, JavaScript, Microsoft, Programming | Tags: , , , , , , | 1 Comment »

I do hope you will pardon the hyperbole a bit, but if someone had told me a few months ago that we would have JavaScript threading, which I have been begging for for years, built into the HTML standard, I would have thought they were crazy.  Now we have a situation where Safari 4, Firefox 3.1, Chrome ( Gears ), and IE 8 ( all in beta ) support it.
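For anyone who hasn’t tried it yet, the worker API is tiny.  A minimal sketch (the file name and the work being done are illustrative):

// main page: spin up a background thread and exchange messages with it
var worker = new Worker('sum.js');
worker.onmessage = function (event) {
  document.title = 'sum = ' + event.data;   // runs back on the UI thread
};
worker.postMessage('1000000');

// sum.js: runs off the main thread, so a long loop no longer freezes the page
onmessage = function (event) {
  var limit = parseInt(event.data, 10);
  var total = 0;
  for (var i = 0; i < limit; i++) { total += i; }
  postMessage(String(total));
};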

Let’s look into my crystal ball for a minute.  Browser-based apps are becoming more capable all the time, arguably the most efficient way to develop for mobile devices is to use web technologies, and with Chrome we have an insanely awesome JavaScript engine available for general use in any programming system.  Looking down the line, I can see JavaScript becoming the primary development language once we start seeing implementations of HTML5 Web Sockets.  They may already exist; I just haven’t checked yet.

If you have Safari 4, or the webkit nightlies, you’ve got to check out this link:

JavaScript Ray Tracer

The speed of JavaScript as an interpreted language is up there with any of the others; in fact, Firefox 3.1, Chrome, and Safari 4 are wicked fast.  Soon we may not need desktop apps at all, and Microsoft’s bungled ActiveX dream may just come to pass.  What an exciting time to be a developer!


Microsoft IE Developer Toolbar

Posted: December 31st, 1969 | Author: | Filed under: JavaScript, Programming, Uncategorized | Tags: , , | No Comments »

I didn't even know about this, and it has been out for about a month. Microsoft has heard the cries from web developers used to using Firefox's developer toolbar extension. While it is often pretty easy to validate your pages, see how your block-level elements are behaving, and look at the DOM of your page using the Firefox extension, it has been almost impossible with the awful lack of tools for Internet Explorer. They have finally addressed this.

The new IE Developer Toolbar has almost everything that its Firefox counterpart has, except for a strong JavaScript debugger. This is very upsetting, especially considering the lame debugging that is built into IE today, but with the relative dearth of tools for Internet Explorer, anything is welcome.

I have found the toolbar to be extremely useful. The DOM inspector is wonderful in that it highlights the selected item, if visible, to indicate which item you are viewing properties for. If you have to build applications or websites for Internet Explorer at work, I hope you are designing for Firefox at home; no, I guess you always have to design for Internet Explorer too, so you will love the new toolbar. I'd suggest that you download it and install it right away.

IE Developer Toolbar


Adobe ColdFusion MX?

Posted: December 31st, 1969 | Author: | Filed under: JavaScript, Programming, Uncategorized | Tags: , , | No Comments »

Now, I am almost never one to stand in the way of business progress; however, this doesn't seem to be a good day for application developers. There are those who believe that with more capital everything gets better, but this developer is not one of those people. Does Adobe have better marketing than Macromedia? Arguably, no they don't. Has Macromedia done a great job of marketing ColdFusion? No, they haven't. Will a combined Macromedia and Adobe do a better job than an unacquired Macromedia? Probably not. I don't think that Adobe will put the resources that are needed behind future ColdFusion development; it is just too far away from their core business competency. So just where will ColdFusion go?

It makes sense for Adobe to sell off ColdFusion and Flex, kill Freehand, GoLive, and ImageReady, and roll Dreamweaver and Fireworks into their Creative Suite in their place. It also makes sense for them to continue Breeze development as that is well within their abilities. Flash will probably thrive under the combined company as should RoboHelp, etc…

I think that if Microsoft is paying attention, and I believe they are, it makes sense for them to acquire ColdFusion from Adobe and combine it, Flex, and XAML into one ubiquitous language. It wouldn't be too hard to map ColdFusion to XAML and vice-versa. The benefit to Microsoft is that they could phase out ASP altogether and embrace the tag-based ColdFusion as their web development language of choice. After all, it is in line with their corporate vision, which is apparently to let web developers make desktop applications as easily as they currently build web applications.

While I don't particularly relish the idea of Microsoft owning my current development language of choice, they do know a thing or two about marketing code, and it wouldn't be difficult to have it run on top of the .NET framework and Java so that it could be portable. It would, of course, be a little faster on the .NET framework. Besides, Microsoft ColdFusion just sounds better than Adobe ColdFusion. Having used the Visual Studio beta for C#, I really like it; I could get used to it being my development environment for ColdFusion. It would also be nice for Microsoft to release a Visual Web Studio for Mac and PC centering around ColdFusion. While we are speculating, it would also be nice to have a .NET framework for Mac and Linux, but that could take a while.

So, let's assume that Adobe has a clue about what they have in ColdFusion. They could begin to use it to develop their own desktop application language around Flex and the standalone Flash player. This is why Redmond's ears will be perked up today, and for the next couple of years. Adobe has an interest in delivering 3D over the web, and Flash makes a good vehicle for it. It would be possible to expand the Flash player into a Flash runtime and use ColdFusion as the language to create all sorts of juicy applications that span the web and the desktop. They would then be in a position to deliver a rapid development environment for desktop applications and would compete squarely with Java and Microsoft in this space, albeit with a much better interface aesthetic.

I hope the latter is what will happen. I believe that competition in all aspects of technology is good for consumers and for the overall business. Still, either way ColdFusion has either a very bright future or a very convoluted one. I find it interesting that none of the analysts looking at this acquisition are looking at ColdFusion. I guess that is because it isn't the primary business driver for Macromedia, and Adobe is all about graphics, which is my primary concern.

There is a third option, and one which looks really good to me. It is possible that Adobe will simply allow ColdFusion to languish, and eventually the product in that form will atrophy and die. This would be bad, but there is an open source movement for an OSS version of ColdFusion. It would be sweet to see this happen, because the language would become more robust, more object-oriented, and a lot faster. It would also be more secure because of all the eyes on the code. ColdFusion could become an underground hit, much the way PHP has been getting a lot of attention recently.

Enter Apple. Has anyone been paying attention to the widgets in the new Tiger? Does anyone get how important this is? Web developers can create really sexy-looking desktop applications as widgets using JavaScript and CSS. This has massive implications, as there is already a significant installed base of JavaScript developers, and many of them happen to be pretty good at CSS. JavaScript has been seeing a revival of late, and I expect that it will continue. Soon, delivering cool applications over the web to Mac users will be easier than learning C# and doing the same for Microsoft users. Enterprises will feel good about building enterprise applications that use these widgets to communicate with Java applications on the back end. Many people are switching to the Macintosh because they feel more secure running a Unix-based system than they do running Windows, and this is another reason for businesses to embrace the Mac, although many of them don't realize it yet.

All of this will marginalize the need for PC users to upgrade to Longhorn. Microsoft already has a tough sell to businesses, given the stagnation of hardware sales and the poor business case for upgrading. Most large organizations are still running Windows 2000, and Microsoft is going to tell them that they have to upgrade every system company-wide in order to run this? If they can get away with it, I'd expect most organizations to switch to Macintoshes because of their lighter IT demands and more granular controls over user access.

Microsoft has to get XAML right, and it makes sense for them to buy their only real competition, which is ColdFusion, especially now that it is owned by an ally that is almost incapable of understanding it or its fanatical developer base (of which I am proud to be a part). They would probably let Microsoft have it for a song, and ultimately ColdFusion would be a more robust language with wider appeal. This would be a good thing. But Microsoft really needs to get its act together today if it hopes to sell even one copy of Longhorn Server. If CF were bundled in the IIS package with it, I would most certainly upgrade, and I think most developers who don't have an irrational hatred of Microsoft would too, if it were a serious effort to make both CF and IIS better. My major gripe with Microsoft is that they make consistently boneheaded business decisions, missing the boat entirely in some places and jumping out with an idea that is ten years ahead of its time in others. I don't hate them for obscure philosophical reasons; in fact I don't hate them at all. I just think they aren't getting the best out of their products or their developer community, and aren't offering their customers what they want.


Dirty Tricks in Web Advertising

Posted: December 31st, 1969 | Author: | Filed under: JavaScript, Programming, Uncategorized | Tags: , , | No Comments »

Contrary to what most people believe, web advertising is in its infancy. Many companies are still trying to figure out what works and what doesn't. Their experiments are understandable; they are trying to figure out an audience that spans all known geographic, ethnic, social, economic, racial, religious, ideological, and moral boundaries. Phew! That was a mouthful. There are still even newer marketing demographics and sub-demographics being created while they are trying to figure out how to target the old ones. How on Earth is a marketing / web development studio supposed to get a grip on all of it? The answer is elusive, but first I will say what won't get the job done, then we'll explore some ways to get it done.

The way advertisers won't get a grip on web niches is by utilizing dirty tricks in advertising. This includes, but is not limited to, pop-ups, pop-unders, JavaScript pop-ups, unwanted JavaScript redirections, Flash pop-ups, spam email, and tacky, poorly designed banner ads. Let's look at these one at a time. There has never been a time in the history of the internet when unsolicited pop-up advertisements were a good thing. As indicated above, this was forgivable because the internet was new, and this was a new way to reach people. Once people began to hate this method of advertising, and demonstrated it by installing software to prevent pop-ups, it should have stopped, right? Wrong. Instead, web marketers began to circumvent users' defenses and use pop-under ads: advertisements that come up and hide behind your top browser window, waiting until you close your browser. Great idea, right?! Wrong. That is like letting the one advertising exec with the awful ideas in the office get a shot at a limited run of ads. For example, he comes up with a new cola bottle showing an overweight child pouring a bag of sugar with the cola label into his mouth, with a moniker reading "cola: making a big America even bigger." This runs in limited fashion despite the passionate pleas of every focus group it is exposed to. Cola sees a radical drop in its sales numbers, but instead promotes this guy to creative director, thereby putting the ads on billboards all over the country. Eventually Cola goes out of business, a smoldering ruin of its former greatness.

That should never happen in real life. That is the absurdity of trying to irritate users into adopting your product: it just doesn't make sense, and it will end up making a company bankrupt. But it didn't stop there. The anti-pop-up software got smarter, and was better able to block pop-unders and JavaScript pop-up windows. Now, there is always going to be an element of shadiness associated with some companies; that is as true in reality as it is on the web, hence the unwanted redirections. But there were, and are, legitimate companies that have used, and are still using, these tactics. Surely by now these companies have gotten the message that users don't want a bunch of pop-ups littering their desktops; and they have. The problem now is that in an effort to be less invasive, they have adopted CSS and Flash pop-ups. Talk about dense! People don't want to wait to get to their content. These are barriers, just like splash pages. People will click away.

Spam email is probably the most reviled thing the internet has ever produced, yet companies continue to send it, and they put their "click here to remove yourself from our list" in something like 6pt font at the bottom of the email, surrounded by disclaimer information. Most users at this point aren't even looking at the garbage that comes across in their email. They either delete it immediately, or they look at the ad and remember the vendor so that they can never, ever buy anything from them again.

Tacky banner ads are the least of the evils described in this article, but they can be just as distracting as pop-ups. Flashing, excessively animated, or audible banner ads are no-nos. If you want people to be able to view your website at work without their bosses going nuts, you should make it look professional so that it blends in with the rest of their applications, not draw attention to it so that they get a reprimand for spending too much time on the net.

So, now that we have explored how not to advertise on the net, let's see how to advertise. When I go to Froogle