Today is a good day to code

The Biggest Trick that Jeff Bezos Ever Pulled

Posted: April 19th, 2012 | Filed under: Amazon, Apple, Companies, Facebook, Google


For reasons unknown, the tech media completely fails to give Jeff Bezos and Amazon the recognition they deserve.  I believe this is due to a deliberate strategy executed by Amazon to quietly grab as much mind share and market share as they can.  If they continue on their trajectory, they may become unassailable; in fact, they may be already.

There are blogs and podcasts called things like Apple Insider, This Week in Google, MacBreak Weekly, etc.  I have yet to hear of any blogs or podcasts about what Amazon is doing week in and week out, but in many ways it is much more interesting.  Amazon now handles 1% of consumer internet traffic, pushing all of it through its near-ubiquitous compute cloud infrastructure.  They are rapidly and efficiently dismantling existing retail.  Amazon is probably on their way to completely owning web commerce.  Amazon has massive amounts of data on what people have, want, and will want, based on what they own and buy.  Through their mobile applications they are gathering pricing signals from competitors, so that they can use their own cluster computing prowess to change pricing on the spot.

What is shocking is that, despite this proficiency, no one discusses how absurdly dominant Amazon has become.  Everyone just treats Amazon running all internet commerce and large swaths of its infrastructure as “the way it is.”  Amazon is more a force of nature at this point than a company.

It isn’t just the tech media that doesn’t give them the credit they deserve; major tech companies don’t either.  Google and Apple seem ready to laugh off the Kindle Fire while Amazon soaks up more signals.  Microsoft doesn’t even try to match them.  Google’s commerce efforts look half-baked compared to what Amazon does, and they show no signs of trying to do better.

With the bitter rivalries we constantly hear about between Apple and Google, Microsoft and Google, Microsoft and Apple, etc., it is absurd that no one has started a podcast about Amazon.  Fifty years from now, technology changes will have toppled Apple, Google, Facebook, and Microsoft, but I’d bet that Amazon will still be around.

Jeff Bezos and his company wield algorithms and data more effectively than anyone else in the industry, despite all the credit we give Google for search.  Their suggestion and comment-filtering algorithms are, bar none, the best around.  Amazon is integrated into the fabric of our lives in a way no other tech company has matched.

Amazon will keep doing what Amazon does best: being ruthless, being efficient, executing better than anyone else, and staying ahead of the curve.  As long as we keep ignoring them, they are doing their job.  The greatest trick Amazon ever pulled was convincing the world that they didn’t exist.  They have convinced the world that they are just retail.


If the FCC had never existed

Posted: March 12th, 2012 | Filed under: android, Apple, AT&T, Companies, Google, Verizon


Much of the tech media is thinking about the problem of wireless spectrum in the wrong way.  It helps to think about why the commission was created, and about the problems its existence has created in the market.

The FRC, and later the FCC, was created in response to complaints by large national carriers that smaller regional broadcasters of dubious quality were broadcasting on all available frequencies with such power that the national broadcasters couldn’t get a clear signal through.

The government at the time also thought it would be a good idea to control the broadcast frequencies so that spectrum would remain available for use with military equipment domestically.

So a deal of sorts was struck: in return for being regulated, national broadcasters would receive a monopoly on broadcast licenses.  This monopoly would later cause the breakup of these same broadcasters.

The spectrum was allocated by frequency with buffer zones between blocks.  Power output limits were also part of the regulation package, without which, the allocated frequency blocks would have been useless.  This represented a modern and efficient set of controls based on the technology of the era.

Skip ahead nearly a century and see what has happened.  The technology has improved, and the regulatory framework has been obviated by it.  However, since we have an intruder in the market preventing normal forces from solving the problem, the same companies who were given the monopoly now control which innovations consumers are allowed to buy.  These same companies set and fix prices, colluding to gouge both content companies and consumers to the tune of ridiculous profits.

Let’s spin back to the past and examine what would likely have happened if the FRC and subsequently the FCC had never been created.

In the late thirties, the first experiments with digital technology were beginning to be performed.  At the same time, content companies were beginning to have difficulty broadcasting because of band saturation on radio, and later television, frequencies.

The war had driven, and was still driving, incredible advances in communications technology, including experiments with digital broadcast technology.

What Bell would have done is seek profit from all of the major broadcasters of the era.  They would have said, “Hey, we have this cool way of using digital transmission that allows us to send tremendous amounts of data from different sources to different destinations through the same frequency, using codes to distinguish one broadcaster from another.  Why don’t you all pay us a small fee to register yourselves with us, and we’ll build a network of general broadcast/receive stations throughout the country.  We’ll use more frequencies as necessary, but since we can pack you in, we will always have plenty.”

This didn’t happen for many reasons, and the idea has its own problems, but what would have happened next is the rapid rise of bundlers: companies whose job it was to use the spectrum they chose as efficiently as possible to maximize their own profit.  It would have sped the pace of innovation and prevented the mess we have now.

Eventually the government would have stepped in and mandated that these bundlers reserve spectrum for military and government use. The bundlers would have complied.

Fast-forward this system to today.  We would have hundreds of free market companies competing for the most efficient use of spectrum, alongside AT&T, T-Mobile, and hundreds if not thousands of competing telecom and internet providers.  The barrier to entry for these providers would be very low, since they would just need to pay whichever bundler best suited their needs.

These bundlers would have built powerful networks of broadcast/receive stations everywhere: on roadsides, inside buildings, etc.  There would be no lack of spectrum, and no need for excessive, heavy-handed regulation, as each advance in technology would allow the bundlers to use ever less spectrum, selling the technology and licensing their spectrum to even more bundlers.

This didn’t happen, so where do we go from here?  There are few places where it is so obvious that regulatory interference has caused irrational behavior in the market as in wireless.  The FCC should embrace digital technology and require broadcasters to form independent corporations to act as the bundlers that I described.  These bundlers would manage the infrastructure for the broadcasters with the broadcasters riding on them.  Once the system was in place, the licensing system would be replaced by power output limits, and the FCC would assume a greatly reduced role.

There are many flaws in my proposal, but we have to get the FCC out of the spectrum licensing business.  Technology is sufficiently advanced that we do not need this frequency-based system; it is causing more harm than good.  We need to let the market work to provide better access to everyone.  It is critical that the barriers to entry for wireless carriers be lowered if we want real competition and innovation in wireless going forward.


Google Should Voluntarily Break Itself Up AT&T Style

Posted: January 21st, 2012 | Filed under: AT&T, Companies, Facebook, Google, Management, Microsoft, Twitter

The Bell Telephone system courtesy of thephonebooth.com

When Google added Search plus Your World, at first I didn’t think there was much of a problem.  I understood that since Twitter and Facebook limit the ways in which Google interacts with them, it wasn’t really possible for Google to offer truly social search.  This cabal between Facebook and Twitter is quite obviously hugely damaging to Google’s future interests as a company, so I also supported the need for Google Plus.

However, as I have been thinking about it, most companies in the past that got into trouble, became anti-competitive, or became foes of the free market did so under the banner of simply looking out for their business interests in responding to a threat.  Inside most potential monopolies, the issue that crops up after smashing a formidable challenge is knowing when to stop.

Google is promoting G+ as the bulk of its social search; G+ is completely unavoidable while you are using the search engine.  This puts Facebook and Twitter at something of a disadvantage.  They also promote YouTube in a similarly in-your-face manner, putting Vimeo and other web video companies at a disadvantage.

It isn’t hard to imagine a world in which startups don’t even look at web video because YouTube is unassailable.  Similarly, one could imagine, though it is more of a stretch, that eventually Facebook and Twitter would wither and die at the hands of Google Plus, since there is really only one search engine and the entire world uses it.  That world would be ridiculously anti-competitive, and no one, including Google, really wants to see it.

I believe that if Google had had its just deserts, Facebook and Twitter would have given it unfettered access to their data, and Google Plus would have been unnecessary.  But since they didn’t, G+ is more than beneficial for Google’s survival; it is essential.  The same could be said about YouTube and Google Music in the face of iTunes.

One could argue as well that Google hasn’t been very effective of late at controlling what is going on within the company.  Clearly there is a massive amount of resource contention, and a general challenge in keeping everyone on the same page and playing for the same team.  In addition, there is the kind of limited thinking that prevents the company from disrupting its own business units.  Microsoft had (and still has) this problem; so did IBM, and so did AT&T.

AT&T, however, operated like a well-oiled machine; they had no problem crushing all competition and effectively responding to all challengers.  Google is just as innovative as AT&T used to be, and they will similarly get through their management issues; in fact, I think they are very near this point.  Google getting through their effectiveness issues, however, is exactly what bothers me: once they become as effective as AT&T used to be, isn’t that where the government steps in?

So what I propose instead is that Google break itself into separate businesses voluntarily.  One of the main rules of business today is never to let a competitor, or a government, disrupt you; it is better, and more profitable, to disrupt yourself.  For this reason, I would suggest to Google that now is a good time to do it.

I would imagine Google becoming five corporations, split along the lines of social, media, search, mobile, and advertising.  This would see Google Plus, Reader, Gmail, Google Talk, and Google Docs become the Google Social business.  Google Docs may initially seem like a strange product to call social, but the purpose of Google Docs is to collaborate on work.  That is pretty social as far as I’m concerned; in fact, it is probably the most social that people are in general.

The media business would consist of YouTube, Google Music, Google TV, and the nascent Google Games.  The search business is self-explanatory.  Mobile would be Android, but also Motorola with the new purchase.  And Google Advertising would be their display, print, and television advertising business.  Each company could retain a small ownership stake in the other companies it depended upon.  For example, Google Media might maintain a 5% to 10% stake in Google Social so that it could be sure its requests were heard and honored.  All of the businesses would have a small share of the advertising business, but the total should not add up to more than 40%, so that the advertising business could remain autonomous.

The resulting companies would end up far more competitive and profitable than their corresponding business units, driven primarily by the need to provide open APIs to the other businesses that need their services.  In the process, these businesses would make those APIs available to other startups, who could build off of Google’s services as a platform, driving further profitability and end-user lock-in.

This would in turn surround their competitors, each still a single silo, who would begin to run into antitrust concerns themselves.  The now ridiculously nimble Google, which could be known as the Googles, would have them surrounded.

As a single entity, Google is vulnerable to the same diseases which have, in the past, felled its erstwhile competitors.  As multiple independent profitable companies, the Googles could remain dominant for decades.  This would be better for the industry as a whole, because each Google business with public APIs would provide a platform for numerous job-creating, profitable startups.  C’mon Google, do what is right for the market and for your business.  Don’t wait for the DOJ to hold a gun to your head as it did with AT&T.  Even with the government forcing the issue, being broken into the Baby Bells seems to have worked out pretty well for AT&T.


Minority Software Engineers are the Canary in the Coal Mine

Posted: October 31st, 2011 | Filed under: Companies, Programming

After reading Mitch Kapor’s post “Beyond Arrington and CNN, Let’s Look at the Real Issues” on minority software engineers and founders in Silicon Valley, I felt that I had to respond.  I think we are missing the real point of what it means that there is a dearth of qualified African-American candidates for employment or as technical founders.  What I have found is that in Silicon Valley, fairness is crucial.  Everyone living and working here in the tech industry is terrified of biases of any sort, as bias indicates a level of irrationality that would make any engineer nervous.  Everyone here prides themselves on detached analysis of value.

I think we are all, myself included, as a black engineer and entrepreneur, missing the point.  The rates of incarceration, dropping out, divorce, alcoholism, drug addiction, etc. are all far higher for black Americans, as they are for most minorities, than the mean.  No matter what else happens at the point of hiring, the number of people available to hire will suffer; in general, there will simply be fewer potential candidates, period.  No bias necessary.  Many of the people who would have become the artists, programmers, and brilliant business people end up washing out by getting caught in the aforementioned traps.

Why do minorities get caught in these traps?  Lack of education for the parents breeds lack of education for the children.  Lack of empathy for the parents breeds lack of empathy for the children.  The data bears this out.  This is bad, and it needs to be fixed before we attempt to hold industry accountable for its hiring practices.  While the decimation of minority groups’ chances for success in this country is a serious problem, what is worse is what it portends for the rest of the country.

All of the above traps are catching more and more people from other groups as well.  The decay of the quality of basic education, combined with the reduced capacity of many to find meaningful work at the entry level, has all but stopped class mobility in our country.  Just look around: how many people do you know, born in the past 20 years, who have transitioned from poverty to middle-class wealth?  I can tell you, not many.

Eventually, this will not be a situation limited to minorities; the only people capable of founding, or working at a high level in, an engineering capacity will be the very few wealthy Americans, foreigners, and immigrants to this country who have had a quality education.  Quality educations do not just happen in school, either; they happen in a child’s free time.  Does your kid just play Call of Duty?  Or does COD make your kid curious about how it is made, so that they play COD interleaved with reading books on OpenGL?

It would be trivial to refactor the educational and social systems of our country to facilitate this curiosity, but the other problems of unstable home life, uneven attendance, huge differences in wealth between school districts, and criminally low standards for teachers, as well as ridiculously low pay, make it as impossible as bipartisan budget resolutions.

Startup founders, or people even in a position to think about creating a startup, are a minority of a minority of a minority.  There are, strictly speaking, too many variables that could create the appearance of bias to be certain there is one.  Only when minority founders show up on Sand Hill Rd. with 3 million monthly active users and a 3% conversion rate to paid, yet still get turned down left and right, should we think there might be a problem.  And let me tell you, if a green two-foot alien showed up in Mountain View with those numbers, every VC in the valley would be lining up to fund them.

The most important thing for us to do now is to think about what prevents people from getting to the stage where they could consider starting a company, no matter their color, and fix those problems. Until then, it is a useless indulgence to talk about biases in the tech industry.


Adding Machine Learning to Nc3 Bb4 Chess

Posted: September 29th, 2011 | Filed under: artificial intelligence, chess, JavaScript, nc3bb4, Programming

While the garbochess engine is plenty strong as used in the Nc3 Bb4 Chromebook chess game, I thought it would be interesting to look at adjusting the weighting mechanism based on successful and unsuccessful outcomes.

The first thing I had to look at was how garbochess weights potential moves.  This took me into the super interesting world of bitboards.  A quick aside: I have been working on MapReduce for the past few weeks, so looking at early methods of dealing with big data was fascinating.  Chess has an estimated ~10^120 possible games, and successfully evaluating all of the possible moves for a given position, plus all of the possible counters, weighting them, and choosing the best move given the criteria certainly qualifies as big data.
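The bitboard idea is worth a tiny illustration. The project itself is JavaScript, but the concept is language-independent; this is a minimal sketch of my own in Ruby, not garbochess’s actual code:

```ruby
# A bitboard stores one bit per square in a 64-bit integer, so whole
# piece sets can be shifted or counted with single integer operations.
# Illustrative sketch only; garbochess's real JavaScript differs.

def square_bit(file, rank)   # file and rank in 0..7
  1 << (rank * 8 + file)
end

def popcount(bb)             # number of occupied squares
  bb.to_s(2).count('1')
end

# All eight white pawns on their starting rank (rank 2, index 1):
white_pawns = (0..7).map { |f| square_bit(f, 1) }.reduce(:|)

# Advancing every pawn one square is a single left shift:
pushed = white_pawns << 8
```

A single shift replacing eight per-piece updates is the whole appeal: move generation becomes arithmetic rather than iteration.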

Interestingly, the approach wasn’t the Hadoop approach; the hardware for such brute-force methods wasn’t available.  Instead, early chess programmers tried to filter out undesirable moves: obvious blunders, moves with no clear advantage, etc.  What they ended up with was a set of moves quite manageable for a circa-2011 computer.

The way garbochess considers moves, it looks at mobility for a given piece, control of the center, whether a capture is possible, what the point differential for a trade would be, etc., and assigns a score to each possible legal move.  It then runs through the set repeatedly, re-scoring moves relative to the available alternatives and removing the lowest-scored ones, eventually arriving at the best possible move.  What I wanted was for it to take that scoring, with its specific weights (mobility vs. actual point value for a given piece), and use a Markov chain for reinforcement learning: describe the entire process of a game, then rate each move, with endgame moves weighted as more important.  Every time the machine takes an action that leads to a success, the bias on the scoring for that action grows heavier.  Failure doesn’t automatically nullify the learning, but it definitely has an effect.
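That outcome weighting can be sketched like this, in Ruby for brevity (the class, the learning rate, and the loss discount here are my own invention, not garbochess internals): each position/move pair accumulates a bias, wins reinforce it, losses discount it without erasing it, and later moves count for more.

```ruby
# Hypothetical sketch of outcome-based move weighting. Each (FEN, move)
# pair keeps a bias that nudges the engine's static score; wins
# reinforce it, losses discount it, and endgame moves weigh more.

class MoveWeights
  LEARN_RATE    = 0.2
  LOSS_DISCOUNT = 0.5   # a loss counts against a move, but less than a win counts for it

  def initialize
    @bias = Hash.new(0.0)   # keyed by [fen, move]
  end

  def adjusted_score(fen, move, static_score)
    static_score + @bias[[fen, move]]
  end

  def learn(game_moves, won)
    reward = won ? 1.0 : -LOSS_DISCOUNT
    n = game_moves.length
    game_moves.each_with_index do |(fen, move), i|
      endgame_factor = (i + 1).to_f / n   # later moves learn more
      @bias[[fen, move]] += LEARN_RATE * reward * endgame_factor
    end
  end
end
```

After a won game, the final move’s bias grows the most; a lost game pulls the same biases down, but only halfway, so one bad result doesn’t wipe out earlier learning.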

Where I ended up was a rudimentary implementation of this, because a bunch of housekeeping chores popped up.  For example, since this is JavaScript and all I really have is HTML5 storage, how do I store all of the moves while keeping the system responsive?  No O(n^n) or O(n^2) lookups; I wanted to keep it at O(n) or better.  Obviously that called for a HashMap of a sort, but the serialization/deserialization plus the key scheme were a challenge.  I didn’t want it to cause too much overhead for the map/scoring system, as the bit twiddling is already pretty efficient, so I did the best I could using the FEN plus the PGN.  The FEN is the state for the Markov chain, since a given PGN could occur in many situations, and without the position the weighting could never be applied against the gravity of the situation.
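The storage side can be sketched as follows (again in Ruby rather than the project’s JavaScript; the names are mine): a hash keyed by FEN gives constant-time lookup, and JSON handles the string round-trip that HTML5 localStorage requires, since localStorage only stores strings.

```ruby
require 'json'

# Sketch of a FEN-keyed weight store. A Hash gives constant-time lookup
# by position; JSON provides the serialization a string-only store like
# HTML5 localStorage would need.

class WeightStore
  def initialize(serialized = nil)
    @weights = serialized ? JSON.parse(serialized) : {}
  end

  def [](fen)
    @weights.fetch(fen, 0.0)   # unseen positions default to no bias
  end

  def []=(fen, value)
    @weights[fen] = value
  end

  def serialize   # string suitable for a localStorage value
    JSON.generate(@weights)
  end
end
```

In the actual game the same pattern would run in JavaScript against `window.localStorage`, with the serialize step happening on save and the parse step on load.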

I need to do more work on weighting changes based on how much trouble the machine is in, whether it has an advantage or not, etc.  But as a start with machine learning in chess, I think it works.


What is Nc3 Bb4

Posted: September 16th, 2011 | Filed under: chess, nc3bb4

Other than being Knight to c3, Bishop to b4, and a move from a variation of the French Defense, it is now my hobby HTML5 chess project.  Based on the garbochess engine, it uses web workers to factor and rank the moves in a map/reduce way.  I used to play chess for an hour and a half every day when I was a kid, and I hadn’t realized how much I missed it until I started playing with my 4-year-old recently.  Then when I got my Chromebook, there didn’t seem to be any HTML5-based, local-storage-using apps just for Chrome.  Incidentally, an awesome resource for getting started with chess programming is the chessprogramming wiki.  Check it out if you have an interest as well.

Being a programmer, and having started out looking at map/reduce, I was seduced by the chess bug.  Boy, chess programming is fun!  For now, the project is in an MVP state.  I intend to modify garbochess so that the machine will learn to play better against you by weighting the ranking of moves for successes and failures during evaluation.  It is going to be tricky to modify it in such a way that the machine learning algorithm doesn’t get stupid with a simple mod, and I will use a Markov chain to make sure state is considered as the basis for each move down the line.  My current thought is to weight endgame moves logarithmically higher or lower depending on the outcome of the game, though my thinking is likely to change as I learn more.
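One way to read “logarithmically higher or lower” is a weight shaped like this; a guess at the shape in Ruby, not a settled design:

```ruby
# Hypothetical endgame weighting: a move's adjustment grows with the
# logarithm of its ply number, normalized so the final move gets 1.0,
# with the sign following the game's outcome.

def move_weight(ply, total_plies, won)
  sign = won ? 1.0 : -1.0
  sign * Math.log(1 + ply) / Math.log(1 + total_plies)
end
```

Early moves still learn under this scheme, just slowly; the endgame dominates, which matches the intuition that the last few moves decide the result.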

I do intend to create an online component to this, but I am torn between connecting to freechess.org and building my own service to facilitate games using XMPP and/or node.js.  I am leaning toward building my own, because freechess.org’s API is non-existent; however, the community over there is awesome.  I guess that since it is a hobby, having a bit of NIH syndrome is probably O.K.  If you like it, please drop me a comment; likewise, if you don’t like it, please drop me a comment.

nc3bb4.com


Teaching my 7 Year Old Daughter to Code (Crypto)

Posted: August 24th, 2011 | Filed under: Lifestyle, Parenting, Programming, Ruby, Teaching Coding

When I started teaching programming to my children, I thought starting with JavaScript was a good idea. I still think that JavaScript is one of the most important languages to learn early in a programming career.  It just doesn’t seem to be the right choice for teaching someone to program when they are 7.

The reason is likely not what one would naturally think: the code isn’t too opaque, and the syntax wasn’t much of a problem for her.  It was just so much work to get output.  With my son, we had worked on a really simple Python program on his hand-me-down OLPC; my daughter was upgraded to an Acer Aspire One for her birthday.

With Python, I felt like we made more progress due to the availability of a REPL.  We were able to make changes to the core code that was solving the problem and see results on the command line quickly.  With JavaScript, we had to create an HTML page, load the script into it, create some sort of markup output, etc.  It just wasn’t as clean an approach to programming.

I have been asked by many about using “kid friendly” programming languages.  I think the people working on those are doing good work; however, there is nothing “kid unfriendly” about the languages that I use for programming as an adult.  I think that in general, when educating our kids, we need to stop coddling them so much.  Creating an approximation of an already dumbed-down environment for writing software to drive machines will not help them.  Most of the kids that I have seen are already beyond Logo and they don’t even know it.  What they seem to want to do are real-world things, and there is no reason they can’t.

What I settled on was Ruby.  It is a language that has a great REPL and is easier to read.  It also has the benefit of a sane shell input mechanism, and it doesn’t require a ton of objects to get started.

We discussed what she wanted to do.  There were several things, all of them deeply social, but we settled on the one I thought was easiest to implement: encrypting messages to her friend, where only she and her friend had the crypt key.  I thought that would be enlightening.

She agreed, so we started coding it up.  First we worked through a few encryption techniques on paper: taking short messages, converting them into their character codes, and then shifting them by adding the char code of each letter of the crypt key to each letter of the message.

def encode msg, key
  coded_msg = ""
  msg.each_char do |letter|
    coded_msg = coded_msg + add_cipher(key, letter)
  end
  return coded_msg
end

def add_cipher cipher, letter
  code = ""
  cipher.each_char do |cl|
    # String#ord works on Ruby 1.9+; on 1.8, cl[0] returned the char code
    code = code + pad(cl.ord + letter.ord)
  end
  return code
end

def pad num
  length = num.to_s.length
  if length >= 3
    num.to_s
  else
    '0' * (3 - length) + num.to_s
  end
end

At first, she put the crypt key into the program, but we discussed that it would be a bad idea since anyone with the source code could then crack the message.  She then asked me how her friend would decode the message.  I told her that the only way was for her to create a “pre-shared key,” something that she told her friend verbally that they would both have to remember, only then could that key be used to decrypt the messages.

What we did was create a multi-step command line program to accept the key and then the message.  We haven’t gotten around to the demux yet, but here is the mux:

print "Enter the crypto key, or die!: "
key = $stdin.gets.chomp

print "Tell me what to do ( 1 for encode, 2 for decode ): "
op = $stdin.gets.chomp

if(op.to_i == 1)
	print "Enter message to encode: "
	message = $stdin.gets.chomp
	coded_message = encode(message, key)
	print "Here is your message: \n"
	print coded_message
else
	# decode ("demux") not written yet
end

The nice thing about all of this is that the code is approachable and the execution path makes sense: this happens, then this happens, and so on.  She can easily understand the flow of this program; we had significant problems with the flow of a client web application.

One of the first things my daughter noticed was that whenever you make the crypt key longer, the message gets longer, and that the encrypted message was many times longer than the original.  So it is working: not only is she understanding programming, but basic cryptography as well.  The only thing I am concerned about now is what happens when she is encrypting her posts on the social media site du jour at 16 with quantum encryption techniques.  How will I ever crack her codes?
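Since the demux isn’t written yet, here is one possible decode counterpart to the encode function above; this is my own sketch, not the program from the post.  Each message letter was encoded as one three-digit block per key character, so decoding slices the ciphertext into groups of 3 × key length and inverts the first block of each group with the first key letter:

```ruby
# A possible decode counterpart to the encode function above (the
# post's "demux" was not yet written; this is a sketch). Each message
# letter became one 3-digit block per key character, so we read the
# groups back and subtract the first key character's code.

def decode(coded_msg, key)
  group_size = 3 * key.length
  coded_msg.chars.each_slice(group_size).map do |group|
    block = group.first(3).join.to_i   # the sum for the first key char
    (block - key[0].ord).chr
  end.join
end
```

For example, "hi" encoded with the key "k" becomes "211212" (104 + 107 and 105 + 107, each padded to three digits), and decoding "211212" with "k" recovers "hi".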


The Post Tablet Era

Posted: July 30th, 2011 | Filed under: android, Apple, Companies, Google, iPhone, Lifestyle, Media

Chromebook

The tablet entered with a huge bang a few years ago.  It was staggering: Apple sold an incredible number of iPads and forced all of the netbook manufacturers and Google to scramble to produce and release a tablet OS, namely Honeycomb, that was arguably not ready for release.

The result is that both iOS and Honeycomb are excellent tablet OSs, and Ice Cream Sandwich promises to be a stellar tablet and smartphone OS.  What I have been discovering over the past year and more of using both versions of the iPad and the Galaxy Tab 10.1 is that I don’t really need a tablet for general computing.

This is surprising to me.  I built an IDE for the iPad and iPhone, after all, and found myself using my own product more on the iPhone for quick edits than I did on the iPad.

I watch an awful lot of Netflix on the iPad, and I play games most of the time that I am using it.  I have found that with the Galaxy Tab my patterns are much the same: gaming, watching videos, and occasionally reading (although I still prefer my Kindle hardware to the tablet versions).

So I am coming to the conclusion that the pundits were right initially: tablets are clearly for content consumption, not content creation.  The reason these devices are not suitable for content creation, however, is worthy of debate, and it is an issue that I’d like to take up now.

Natural User Interfaces

The user interaction that most tablets sport as the default is something being called a natural user interface: an interaction that uses more of the user’s senses, such as motion, to perform an on-screen action.  The current crop of tablets mainly uses touch, instead of a dedicated hardware component, to facilitate user interaction with the interface.

This lends itself obviously to gaming and to a “kick back” experience of sorts.  The user can use touch or the gyroscopes to control a character on the screen, and this makes logical sense to just about any user.

As an example, many role-playing games have a 3/4 view of the game board; that is, the camera is typically at 5 o’clock high, or somewhere thereabouts.  The control scheme for most of these games is to touch a place on the screen to send the character to that location.  Role-playing games work particularly well on tablets for this reason; they are almost better with a touch interface than with a controller.

As another example, car racing games use the accelerometer in the tablet to control an on-screen car.  This works well unless you are in a position in which your motion is constrained, such as in bed, so most of these games provide some sort of alternate touch-based interaction that replaces the accelerometer input.

The problem with using on-screen touch points in driving and first-person shooter games is that the controller now covers part of the screen, or your hands end up covering important parts of the game world, causing the player to miss part of what is happening.  I know that in my case it takes away from the FPS experience, so I typically don’t buy those sorts of games on tablets and prefer to play them on a console instead.

Natural user interfaces only work when the content is modified so that the user can interact with it sensibly using the available sensors: gyroscope, touch screen, microphone, et cetera.  In a famously bad case of using a natural user interface to present content designed for traditional input, Numbers presents the user with a typical spreadsheet like the one you would find in Excel on your Mac or PC.  The issue is that Apple didn’t modify the presentation of the content to match the platform.  Arguably there is no way to do this in a form that makes sense.

The interface for Numbers features beautiful graphic design elements and is generally pleasant, but when you tap on a grid element, a virtual keyboard pops up and you are invited to type into the fields.  Apple has made a numeric keyboard interface that is pretty nice, but any time you display the virtual keyboard, you haven&#8217;t thought hard enough about the problem.  Displaying a grid of content is amazingly useful on the desktop, but it just doesn&#8217;t work on this device.  Inputting large amounts of data is frustrating, and virtual keyboard mistakes are all too common, whether from mistyping or from the misguided autocorrect.

Modifying Content for the Natural Interface

Most of the people buying tablets today appear to be tolerating these issues; my belief is that they do so because tablet computers feel like a piece of the future they were promised as children, useful or not.  Eventually, they will likely stop using their tablets at all in favor of ultralight laptop computers, or they will relegate the tablet to the living room table as a movie-watching and game-playing platform.

It is possible to make significant user input acceptable on a tablet, perhaps even pleasurable, with a bit of creativity.  First, the virtual keyboard is a complete failure.  It has its place, but in most cases it can be replaced by effective gesture (non-touch) and speech recognition.  That is the only viable way to input large amounts of content.

On the visualization front, returning to the Numbers example, perhaps a flat grid is not something that makes sense on the tablet.  Maybe we should send the data to a server for analysis and present it as a series of graphs that the user can change, manipulating the graph directly with touch actions or with spoken commands.  The result of the changes would flow back into the spreadsheet, updating the numbers behind the visualization.
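The round trip described above, where a touch gesture edits a bar in a chart and the change flows back into the cells behind it, can be sketched with a stand-in data model. None of this is a real spreadsheet API; it only shows the direction of the data flow.

```python
# Sketch of a chart-backed spreadsheet: dragging a bar writes the new
# value back into the underlying cell.  The cell model is hypothetical.

class ChartBackedSheet:
    def __init__(self, cells):
        self.cells = dict(cells)          # e.g. {"B2": 120.0}

    def bars(self):
        """The chart view: one bar height per cell, in cell order."""
        return [self.cells[k] for k in sorted(self.cells)]

    def drag_bar(self, cell_ref, new_height):
        """A touch drag on a bar updates the spreadsheet cell behind it."""
        self.cells[cell_ref] = new_height
```

The point of the design is that the chart is the primary interface and the grid becomes a hidden backing store, which is the inversion the tablet seems to demand.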

Many would argue that this would not be a rich enough interaction for some of the complex spreadsheets, pivot tables, and so on that they work with; indeed, it likely would not.  Most of those users would not perform these actions on the tablet anyway; they would use a MacBook Air or another lightweight laptop.  It takes a huge amount of creativity and intelligence, as well as significant computing power, to manipulate data in this way.

Imagine a speech interface for a word processor that could use the camera to track your facial expressions and improve its recognition accuracy.  It could, and should, track your eyes to move the cursor, and ask you for a correction when you make a bad face at a misinterpreted sentence.  An application like this could make word processing on a tablet a wonderful experience.

The technology to do most of these things exists today.  But it is either fragmented, with each part patented by a different company, some of which, like Microsoft with the Kinect, don&#8217;t even make a tablet; or the effort to build software that uses a tablet&#8217;s features to best effect is too great to justify the investment.  For example, doing that sort of work for a word processor doesn&#8217;t make sense when people will just jump over to their laptop and use Word.  Would anyone pay $100 up front for an iPad word processing application?  I don&#8217;t think so.  Would anyone pay $25 per month for the same application as a service on the iPad?  It&#8217;s equally doubtful.

What you eventually come to is this: for interacting with content that either naturally lends itself to the tablet or can easily be modified for it, the tablet is fantastic.  Currently, however, it is severely overpriced for how it is actually used.  After all, you can get a fairly cheap notebook that plays Netflix and casual games for $200, about a third of the price of most tablets.  If you have to carry your laptop anyway, why would you have a tablet at all?  Why wouldn&#8217;t you take the Air with you and leave your tablet at home?  It can do everything the tablet can do, and it can also handle any content creation you care to try.

Thinking about the situation, we need better business models that will allow for the development of applications that can make the content modifications we need for tablets to be generally useful.  This will take a while, and in the interim some companies will likely produce tablet hybrids; the ASUS Eee Pad Transformer is one that comes to mind.  It is very popular, runs a mobile tablet operating system, and becomes a keyboard-wielding notebook in a second.

The Google Chromebook is another example of a laptop that is lightweight, even in its software, and can do most of what a tablet can do as well as most of what the typical laptop does.  In my own use, excluding building applications for tablets, I always reach for my Chromebook instead of my tablets.  And all of this is before considering the huge difference in how hard it is to build applications for each platform.

Writing applications for tablets is extremely hard, with a doubtful return on investment unless you are making a media or gaming title.  Writing applications for the web, by contrast, is easy and potentially extremely lucrative, with many possible business models and little interference from device manufacturers.

I am starting to think that Ray Ozzie was right when he said that Chrome OS was the future.  It feels more like the near future than the iPad does at this point.  The tablet will always have its place, and perhaps, with significant advances in natural user interface technology and accordant price reductions, it will start to take over from the laptop.  I am fairly bullish on the natural user interface over the long term, but I pragmatically understand that we aren&#8217;t there yet.  The devices, the software, and consumers all have a lot of work to do before we really enter the era of the computerless computing experience.  I am committed to getting there, but I think the current crop of tablets might be a false start.


DHH vs TechCrunch vs Groupon

Posted: June 2nd, 2011 | Author: | Filed under: Companies, Groupon, Media | No Comments »

It&#8217;s funny to compare DHH&#8217;s (David Heinemeier Hansson) post about Groupon&#8217;s S-1 with TechCrunch&#8217;s:

DHH's Groupon Revenue Twitter Post

TechCrunch's Groupon IPO Post


Deciding When to Implement Features in a Startup

Posted: April 26th, 2011 | Author: | Filed under: Companies, Programming | Tags: , , , | No Comments »


Over the weekend I have been thinking about which features should be implemented, and in what order. I realized that it didn&#8217;t make sense to try to prioritize each possible feature individually: there are thousands of them, it would take too long, and it would not necessarily result in a reasonable prioritization. So I started to think up a framework for deciding which feature gets implemented when.

One of the first things to think about is which development strategy the organization is using. In my case, we are using a lean-agile approach. This approach suggests that feature decisions should be based on whether or not implementing a feature will positively affect one of the startup&#8217;s key metrics. An example: if you have a freemium product, conversions from free to paid.

What I would add to that philosophy is that it isn&#8217;t enough for a feature to move one of the metrics; it has to move the right metric for your startup at its current growth stage. If you are raising money, then gaining some sort of traction is likely more important than long-term customer retention. If you already have product-market fit, and you earn revenue through use of the product, then it makes sense to focus on making your software more pleasant to use.
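The stage-aware rule above can be sketched as a tiny scoring function. The stages, metric names, and impact scores are purely illustrative assumptions, not a real methodology; the point is only that the sort key changes with the startup's stage.

```python
# Hedged sketch: rank features by their impact on the one metric that
# matters at the current stage.  Stage names and scores are made up.

STAGE_METRIC = {
    "fundraising": "traction",
    "product_market_fit": "retention",
}

def prioritize(features, stage):
    """Sort features by estimated impact on the stage's key metric."""
    metric = STAGE_METRIC[stage]
    return sorted(features,
                  key=lambda f: f["impact"].get(metric, 0),
                  reverse=True)
```

Under "fundraising" a traction-heavy feature sorts first; the same feature list re-ranks once the startup reaches product-market fit, which is exactly the shift in priorities the paragraph above argues for.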

To clarify this for myself, since I tend to obsess about usability a bit, I thought back to when I switched from the PC to the Mac. I told myself it was largely because I liked the usability of the product better; while that was true, it wasn&#8217;t the real reason. I had just received a video camera and was playing around with various low-end PC video editing software, which basically sucked. Then I found out that the Mac came with iMovie. iMovie was far from perfect, but it was really good and gave me a capability that I didn&#8217;t have before: I could now easily edit long home movies and burn them to DVD.

I realized that Apple got me to convert on a first order feature and retained me on second order features. That is, they gave me a capability I didn&#8217;t have before, thereby converting me from a non-Apple user to an Apple user, and I created mainline revenue for them.

Apple has always been about this. Thinking about the first iPhone: what did it do for me? It didn&#8217;t have a ton of features, and some of the ones it did have were missing obvious things, like copy and paste, which took a long time to arrive. At first I didn&#8217;t understand why, but later it made sense: Apple was optimizing for that initial conversion. The product gave me the capability to use the internet and share photos in a minimal yet useful way, and that was what made me convert. If Microsoft had done that, I would have bought a Microsoft phone that day.

So what I decided was that I needed to figure out what the startup needs. Right now we are fundraising and trying to acquire customers, which to me means we need features that get people to buy. Those are first order, first priority features: the things that make a material difference for the startup and answer the customer&#8217;s question, &#8220;What fundamental thing can I do with your product that I can&#8217;t already do?&#8221; That is the question you must answer to get people to convert.

Second order features, the ones that make your product nicer to use, are incredibly important. However, depending on your business model and your organization&#8217;s goals, you may be wasting time on second order features when you haven&#8217;t figured out the first order ones. Frequently, when you see startups die, this is the reason: they are working on performance, or some tricky usability feature that is really awesome and hard and is sucking up time, while they aren&#8217;t adding users because they have failed to answer the critical question. Maybe you know what your product will eventually do that is transformative, but your prospective customers don&#8217;t. You must work on the transformative, game-changing, disruptive features first.