Today is a good day to code

The Post Tablet Era

Posted: July 30th, 2011 | Filed under: android, Apple, Companies, Google, iPhone, Lifestyle, Media

Chromebook

The tablet entered with a huge bang a few years ago. It was staggering: Apple sold an incredible number of iPads and forced all of the netbook manufacturers and Google to scramble to produce and release a tablet OS, namely Honeycomb, that was arguably not ready for release.

The result, with both iOS and Honeycomb, is two excellent tablet OSs, and Ice Cream Sandwich promises to be a stellar tablet and smartphone OS. What I have discovered over the past year-plus of using both versions of the iPad and the Galaxy Tab 10.1 is that I don't really need a tablet for general computing.

This is surprising to me.  I built an IDE for the iPad and iPhone after all, and found myself using my own product more on the iPhone for quick edits than I did on the iPad.

I watch an awful lot of Netflix on the iPad, and I play games most of the time that I am using it. I have found that with the Galaxy Tab my patterns are much the same: gaming, watching videos, and occasionally reading (although I still prefer my Kindle hardware to the tablet versions).

So I am coming to the conclusion that the pundits were right initially: tablets are clearly for content consumption, not content creation. The reason, however, that these devices are not suitable for content creation is worthy of debate, and it is an issue that I'd like to take up now.

Natural User Interfaces

The user interaction that most tablets sport as the default is something that is being called a natural user interface, that is, an interaction that uses some of the user's other senses, such as motion, to perform an on-screen action. The current crop of tablets mainly uses touch instead of a dedicated hardware component to facilitate user interaction with the interface.

This lends itself obviously to gaming, and to a "kick back" experience of sorts. The user can use touch or the gyroscope to control a character on the screen, which makes logical sense to just about any user.

As an example, many role-playing games have a 3/4 view of the game board; that is, the camera is typically at 5 o'clock high, or somewhere thereabouts. The control scheme for most of these games is to touch a place on the screen to send the character to that location. Role-playing games work particularly well on tablets for this reason; they are almost better with a touch interface than with a controller.

As another example, car racing games use the accelerometer in the tablet to control an on-screen car. This works well unless you are in a position in which your motion is constrained, such as in bed, so most of these games provide some sort of alternate touch-based interaction that replaces the accelerometer-based input.

The problem with using on-screen touch points in racing and first-person shooter games is that the controller now covers part of the screen, or your hands end up covering important parts of the game world, causing the player to miss part of what is happening. I know that in my case it takes away from the FPS experience, so I typically don't buy those sorts of games on tablets and instead prefer to play them on a console.

Natural user interfaces only work when the content is modified such that the user can interact with it sensibly using the available sensors: gyroscope, touch screen, microphone, et cetera. In a famously bad case of using a natural user interface to interact with content from a platform that uses traditional input, Numbers presents the user with a typical spreadsheet like the one you would find in Excel on your Mac or PC. The issue here is that Apple didn't modify the presentation of the content so that it matches the platform. Arguably there is no way to do this in a form that makes sense.

The interface for Numbers features beautiful graphic design elements and is generally pleasant, but when you tap on a grid element, a virtual keyboard pops up and you are invited to type into the fields. Apple has made a numeric keyboard interface that is pretty nice, but any time you display the virtual keyboard, you haven't thought hard enough about the problem. Displaying a grid of content is not useful on this device; it is amazingly useful on the desktop, but it just doesn't work here. Inputting large amounts of data is frustrating, and the virtual keyboard makes mistakes all too common, either because of mistyping or because of the misguided autocorrect.

Modifying Content for the Natural Interface

Most of the people who are buying tablets today appear to be tolerating these issues; my belief is that they are doing this because tablet computers feel like a piece of the future they were promised when they were children, useful or not. Eventually, they will likely stop using their tablets at all in favor of ultralight laptop computers, or they will relegate the tablet to the living room table as a movie-watching and game-playing platform.

It is possible to make significant user input acceptable on a tablet, perhaps even pleasurable, by using a bit of creativity. First, the keyboard is a complete failure. It has its place, but in most cases it can be replaced by effective gesture (non-touch) and speech recognition. This is the only viable way to bring in large amounts of content.

On the visualization front, to return to the Numbers example, perhaps a flat grid is not something that makes sense on the tablet. Maybe we should send the data to a server for analysis and present it as a series of graphs that the user can change, manipulating the graph directly with touch actions or with spoken commands. The result of the changes would flow back into the spreadsheet, updating the numbers behind the visualization.

Many would argue that this would not be a rich enough interaction for some of the complex spreadsheets, pivot tables, and so on that they work with, and indeed, it likely would not. Most of these users would not perform these actions on the tablet; instead they would use a MacBook Air or another lightweight laptop computer. It takes a huge amount of creativity and intelligence, as well as significant computing power, to manipulate data in this way.

Imagine a speech interface for a word processor that could use the camera to track your facial expressions to augment its speech accuracy.  It could, and should, track your eyes to move the cursor and ask you to correct it when you make a bad face at a misinterpreted sentence.  An application like this could make word processing on a tablet a wonderful experience.

The technology to do most of these things is here. It is either fragmented, with each part patented by a different company, some of which have no tablet at all, such as Microsoft with the Kinect, or the effort to produce a piece of software that utilizes the features of tablet computers to best effect is too great to justify the investment. For example, doing that sort of work for a word processor doesn't make sense when people will just jump over to their laptop to use Word. Would anyone pay $100 up front for an iPad word processing application? I don't think so. Would anyone pay $25 per month for the same application as a service on the iPad? It's equally doubtful.

What you come to eventually is that, for interacting with content that either naturally lends itself to, or can be easily modified for, the tablet, it is fantastic. Currently, however, it is severely overpriced for how it is being used. After all, you can get a fairly cheap notebook that can play Netflix and casual games for $200, or a third of the price of most tablets. If you have to carry your laptop anyway, why would you have a tablet at all? Why wouldn't you take the Air with you and leave your tablet at home? It can do everything the tablet can do, and it can also handle any of the content creation that you care to try.

Thinking about the situation, we need to find better business models that will allow for the development of applications that can handle the modifications to content that we need for tablets to be generally useful. This will take a while, and in the interim it is likely that some companies will produce tablet hybrids; the ASUS Eee Transformer is one tablet that comes to mind. It is very popular, runs a mobile tablet operating system, and becomes a keyboard-wielding notebook in a second.

The Google Chromebook is another example of a lightweight (even in software) laptop that can do most of what a tablet can do, as well as most of what the typical laptop does. In my own use, excluding building applications for tablets, I always reach for my Chromebook instead of my tablets. And all of this is without considering the huge difference in the difficulty of building applications on the two platforms.

Writing applications for tablets is extremely hard, with a doubtful return on investment unless you are making a media or gaming title, while writing applications for the web is easy and potentially extremely lucrative, with many variations on possible business models and little interference from device manufacturers.

I am starting to think that Ray Ozzie was right when he said that Chrome OS was the future. It feels more like the near future than the iPad at this point. The tablet will always have its place, and perhaps with significant advances in natural user interface technology, and accordant price reductions, it will start to take over from the laptop. I am fairly bullish on the natural user interface over the long term, but at the same time I pragmatically understand that we aren't there yet. The devices, software, and consumers have a lot of work to do before we really enter the era of the computerless computing experience. I am committed to getting there, but I think that the current crop of tablets might be a false start.


DHH vs TechCrunch vs Groupon

Posted: June 2nd, 2011 | Filed under: Companies, Groupon, Media

It's funny to compare DHH's (David Heinemeier Hansson) post about Groupon's S-1 with TechCrunch's:

DHH's Groupon Revenue Twitter Post

TechCrunch's Groupon IPO Post


Deciding When to Implement Features in a Startup

Posted: April 26th, 2011 | Filed under: Companies, Programming


Over the weekend I was thinking about which features should be implemented and in what order. I realized that it didn't make sense to try to prioritize each possible feature, as there are thousands of them; that would take too long and not necessarily result in a reasonable prioritization. So I started to think up a framework for deciding which feature would be implemented when.

One of the first things to think about was the framework of the startup, or which development strategy the organization is using. In my case, we are using a lean-agile approach. This approach suggests that feature decisions should be based upon whether or not the implementation of a feature will positively affect any of the key metrics of the startup. An example would be, if you have a freemium product, conversions from free to paid.

What I would add to that philosophy is that it isn't enough that a feature move one of the metrics; it has to move the right metric for your startup at its current growth stage. If you are raising money, then gaining some sort of traction is likely more important than long-term customer retention. If you already have product-market fit, and you earn revenue through use of the product, then it makes sense to focus on making your software more pleasant to use.

To clarify this for myself, since I tend to obsess about usability a bit, I thought back to when I switched from the PC to the Mac. I was telling myself that it was largely because I liked the usability of the product better, and while this was true, it wasn't the real reason. I had just received a video camera at the same time and was playing around with various low-end PC video editing software, which basically sucked. Then I found out that the Mac came with iMovie. iMovie was far from perfect, but it was really good and gave me a capability that I didn't have before. I could now edit long home movies easily and burn them to DVD.

I realized that Apple got me to convert on a first-order feature, and retained me on a second-order feature. That is, they gave me a capability that I didn't have before, thereby converting me from a non-Apple user to an Apple user, and I created mainline revenue for them.

Apple has always been about this. Thinking about the original iPhone, what did it do for me? Well, it didn't have a ton of features, and some of the ones it did have were missing obvious things, like copy and paste, which took a long time to arrive. At first I didn't understand why, but later it made sense. Apple was optimizing for that initial conversion: the product gave me the capability to use the internet and share photos in a minimal yet useful way. That was what made me convert. If Microsoft had done that, I would have bought a Microsoft phone that day.

So what I decided was that I needed to figure out what the startup needs. Right now we are fundraising and trying to acquire customers, so to me that means we need features that get people to buy. Those are first-order, first-priority features: the things that make a material difference for the startup and answer the customer's question, "What fundamental thing can I do with your product that I can't already do?" That is the question you must answer to get people to convert.

Second-order features, the ones that make your product nicer to use, are incredibly important. However, depending on your business model and your organization's goals, you may be wasting time working on second-order features when you haven't figured out the first-order ones. Frequently, when you see startups die, this is the reason: they are working on things like performance, or some tricky usability feature that is really awesome and hard, so it sucks up time, and meanwhile they aren't adding users because they have failed to answer the critical question. Maybe you know what your product will eventually do that is transformative, but your prospective customers don't. You must work on the transformative, game-changing, disruptive features first.


Color may actually be worth $41 million

Posted: March 27th, 2011 | Filed under: Companies

I've been thinking about the hubbub that Color has generated by raising $41 million in venture capital on an either just-released or unreleased iOS and Android app. Some people are saying it is the heralding of a new bubble; others are saying that it is not. Being an engineer and a budding usability and product aficionado, as well as a contributor to the EFF, I have a slightly different take on it.

First, let's define what Color is. People are saying that it is a photo-sharing app; that isn't it at all. That is a placeholder app concept that gets it past the censors and answers, simply, the question of what you would use it for. What Color is, is a mechanism for harnessing the entirety of the sensors in your smartphone. For those who don't know, modern smartphones have a camera (and video camera), accelerometer, gyroscope, aGPS (GPS assisted by cellular triangulation), microphone, and a capacitive touch sensor built in. Most mobile applications use these sensors individually for various purposes.

The team at Color seems to have approached this from a slightly different perspective: instead of asking what they could build with all of these sensors, they asked what would happen if all of these sensors were live-streaming their data directly to their servers. From a purely technical perspective this is absolutely brilliant, and they have come up with some very clever techniques to use this data in the service of determining who you are currently around as well as where you are. This application heralds a paradigm shift in thinking about geo data: instead of where you are, which is marginally important, it tries to solve the problem of who you are around, which is much more important.

Therein lies the rub, however. I don't want this application anywhere near me. I don't want my friends using it around me, I don't want them taking pictures of me with it, and so on. The potential for misuse of this data is too great. It makes Facebook look like someone peering at your house with a telephoto lens, relative to an intruder actually inside your home. From a privacy perspective it is a disaster. What is it doing with the audio? Does it send it to the servers, or is it all local? Do they do any facial recognition on the pictures, which would let them, and people I only marginally know or don't know at all, know who I am and who I am around? There is a perfectly good reason why Google, when building their augmented reality solution, backed away from facial recognition: it is too creepy and too much of a risk. I would tend to agree.

Color's value as a startup, however, has nothing to do with this. Often having a good idea means doing what your competitors are either unable or unwilling to do, and they certainly have done a lot that most are unable and/or unwilling to do. What that means for them in the future is interesting, but I'll need to see a few more privacy features, and policy statements from the company, before I will use it. Whether or not the broader audience will use it is unclear and also irrelevant; someone will find a purpose for this product if they can get to 10 million users. I think they can, but they will need to tighten up their security and privacy policies first. There are times when I don't care if everyone knows where I am (WWDC, Google I/O, SXSW, for example), and there are other times when I may not want people knowing where I am and who I am with (at a school play with my kids, for example). I don't really want the entire internet to know where my kids go to school, what they look like, who they go to school with, what their names are, and so on. This thing gives up the user's entire pattern, and it is a stalker's or predator's dream come true, not to mention the potential for hacking using social engineering.

Bottom line: it's amazing tech, and I think they are worth the money; however, the policy aspect will hold them back until they establish viable privacy controls.


Why I Am Not Switching to Verizon (And You Shouldn't Either)

Posted: January 10th, 2011 | Filed under: Apple, Verizon

There have been a few times recently that AT&T's network has saved my bacon: once in San Diego, a couple of times in Denver, and a few times in San Francisco. AT&T is definitely faster on downloads than Verizon, and often faster than the hotel broadband. Not only that, but Verizon actually costs more than AT&T, which is ridiculous. Android phones that are wildly more complicated to use are not a useful proxy for how much bandwidth the average iPhone user consumes. I think I'll stick with the carrier that has experience dealing with the iPhone's load.

The other thing that bugs me a bit about the whole "AT&T sucks" thing is that it is so market dependent. Typically, which carrier "sucks" in a given market comes down to whether, and how much, sub-1 GHz spectrum they have there. Since Verizon has a bunch of 700 MHz spectrum here in the Bay Area, they typically have better signal indoors and so on, leading people to believe that Verizon is the second coming. It is a similar situation in many markets, but not all, so your mileage may vary.

As far as voice is concerned, Verizon may have a bit of an edge, but with data, the jury is still out. Technically Verizon should perform marginally better on voice traffic in large markets due to the efficiency of CDMA, but since both carriers are moving to LTE, this is a largely insignificant difference. The real difference maker will be latency and the size of the backhaul. AT&T has spent the past three years or so constantly upgrading its backhaul in all of its markets; I haven't heard anything about Verizon upgrades.

Verizon might do a bit better in their packet latency, but I’d wager that it isn’t enough of a difference to really make anyone switch. AT&T’s latency isn’t that bad for a mobile provider, and it has actually been getting better here in the Bay Area.

I think that everyone who leaves AT&T for Verizon will be complaining the same way they were when they were on AT&T. The smartphone market caught all of the carriers with their pants down, let alone tethering. The rumor is that Verizon will offer an unlimited data plan; firstly, this is unlikely, and secondly, if they do, I'm sure there will be some ridiculous throttling that will make power users wish they could pay for full speed.

At the end of the day, I'm glad that Verizon is finally going to get the iPhone; it will keep AT&T honest. I hope that T-Mobile gets it too so we can have some decent competition. All phone companies are the same; they all want to get the most for the smallest amount of capital invested. Verizon will probably try to louse up the iPhone with their "apps" and end up screwing up the experience in some way. Starting off by reducing the time users have to return their phones is a great way to reduce confidence in their ability to deliver the service level that they claim.


Google’s Vision of the Future is Correct… But They May Not Be The Ones Who Implement It

Posted: January 8th, 2011 | Filed under: Companies, Google, Lifestyle

On a drive from Colorado to Las Vegas this past week, my daughter and my son were in the back seat of our car using my daughter's netbook; she recently turned 7 years old, so I bought her a netbook and I am starting to teach her how to code. My son wanted my daughter to change the video that they were watching, and she began to explain to him how the internet works.

She told him that all of her stuff was on the internet (emphasis mine) and that the movie they were watching was the only one on her netbook. She explained how her computer was barely useful without the internet, and that the internet came from the sky, so her computer needed to have a clear view of the sky to receive it. In addition, she said that since we were in the car and the roof was obscuring said view, they couldn't get the internet and couldn't change the movie.

Listening to this conversation gave me a bit of pause as I realized that to my children, the internet is an ethereal cloud that is always around them. To me it is a mess of wires, switches, and routers with an endpoint that has limited wireless capabilities. When I thought it through, however, I realized that my kids had never seen a time when someone had to plug in their computer to get to the web. Plugging in an ethernet cable is as old school as dial-up.

Once that sank in, I understood that the Cr-48, Google's Chrome OS netbook, is a step in the right direction, and while I am very enthusiastic about several aspects of Google's (and, in all fairness, others') vision of a web-based future, I do not feel that the current approach will work.

A centralized system where all of a user's data lives, and through which all communications pass, is not an architecturally sound approach. As the number of devices that each user has goes up, the number, size, and types of connections are going to stress the servers exponentially.

It is already incredibly difficult to keep servers running at internet scale; we need entire redundant data centers to keep even small and simple web-scale endeavors running. When you take a step back you realize that a system like Facebook is barely working; it takes constant vigilance and touching to keep it running. It isn't like a body, where each additional bit adds structural soundness to the overall system; instead each additional bit makes the system more unwieldy and pushes it closer to breaking.

Google is another example of a system that is near the breaking point. Obviously they are struggling to keep their physical plant serving their users, and like Facebook they are so clever that they have always been able to meet each challenge and keep it running to date. But looking at the economics of it, the only reason this approach has been endorsed is how wildly lucrative mining usage patterns and the data generated by users has been.

I don’t think this will continue to be the case as the web reaches ever larger and larger groups of people. I don’t think any particular centralized infrastructure can scale to every person on the globe, with each individual generating and sharing petabytes of data each year, which is where we are going.

From a security and annoyance perspective, spam, malware, and spyware are going to be an ever-increasing, and more dangerous, threat. With so much data centralized in so few companies with such targeted reach, it is pretty easy to send viruses to specific people, or to gain access to specific individuals' data. If an advertising company can use a platform to show an ad to you, why can't a hacker or virus writer?

The other problem that is currently affecting Google severely, with Facebook next, is content spam: those parking pages that you come across when you mistype something in Google. Google should have removed these pages ages ago, but their policy allows them to exist. Look at all of the Stack Overflow clones out there; they add no real value except for delivering Google AdSense off of Creative Commons content. What is annoying is that, because of the ads, they take forever to load. Using a search engine like DuckDuckGo, things are better, but this is likely only because it is still small. DDG also says that it will not track its users, which is awesome, but how long will that last?

It is possible for a singularly altruistic person to algorithmically remove the crap from the web in their search engine, but eventually it seems that everyone bows to commercial pressure and lets it in, in one fashion or another.

Concentrating all of the advertising, content aggregation, and the content itself in a couple of places seems nearsighted as well. The best way to make data robust is to distribute it; making Facebook, or Google, or Apple for that matter, the only place where you keep your pictures is probably a bad idea. Maybe it makes sense to use all three, but that is a nuisance, and these companies are not likely to ever really cooperate.

It seems to me that something more akin to Diaspora, with a little bit of Google Wave, XMPP, the iTunes App Store, and BitTorrent, is a better approach. Simply put, content needs to be pushed out to the edges into small, federated private clouds.

This destroys most of the value concentrated by the incumbents based on advertising, but it creates the opportunity for the free market to bring its forces to bear on the web. If a particular user has content that is valuable, they can make it available for a fee; as long as a directory service can be created that allows people to find that content, and the ACLs for that content live with, and are under the control of, the creator, that individual's creation cannot be stolen.

Once the web is truly pervasive, this sort of system can be built; it will, however, require new runtimes, new languages, protocols, and operating systems. This approach is so disruptive that none of the existing large internet companies are likely to pursue it. I intend to work on it, but I'm so busy that it is difficult. Fortunately, however, my current endeavor has aspects that are helping me build skills that will be useful for this later, such as the BEAM/Erlang/OTP VM.

The benefit is to individuals more than it is to companies; it is similar to the concept of a decentralized power grid. Each node is a self-sufficient generator, and the system is nearly impossible to destroy as long as there is more than one node.


Script to Bundle Running EC2 Instance

Posted: October 1st, 2010 | Filed under: Amazon, Companies

For the past few hours I have been battling with trying to create a script to bundle a running EC2 instance. After many "S3 access denied" errors, "Error talking to S3: Server.OperationAborted(409): A conflicting conditional operation is currently in progress against this resource. Please try again." errors, and "you are trying to upload to a different region than you are bundling in" errors, I think I finally have it.

The key is to just allow the script to create the bucket. I had used S3Fox to create the bucket, and it built it with no region. Performing a manifest migration solved one problem, but after that was solved, I still kept getting the error. Even after I deleted the bucket and changed the name of the bucket in the script, it was still giving me the "OperationAborted" issue. After a bunch of trying, this script got me to the gold.

#!/bin/bash

# Fill these in for your environment
remotehost=yourec2instanceaddress
remoteuser=root
bucket=yourbucketname
prefix=yourprefixname
AWS_USER_ID=yourawsuserid
AWS_ACCESS_KEY_ID=yourawsaccesskeyid
AWS_SECRET_ACCESS_KEY=yourawssecretaccesskey

# Copy the X.509 cert and private key to the instance's /mnt
rsync --rsh="ssh -i |pathtoyourawskey|.pem" --rsync-path="sudo rsync" |pathtokeys|{cert,pk}-*.pem $remoteuser@$remotehost:/mnt/

# Bundle the volume, migrate the manifest to us-west-1, upload the bundle, and register the AMI
ssh -i |pathtoyourawskey|.pem $remoteuser@$remotehost "\
export JAVA_HOME=/usr;\
sudo -E ec2-bundle-vol -r i386 -d /mnt -p $prefix -u $AWS_USER_ID -k /mnt/pk-*.pem -c /mnt/cert-*.pem -s 10240 -e /mnt,/root/.ssh;\
sudo ec2-migrate-manifest --region us-west-1 --manifest /mnt/$prefix.manifest.xml -k /mnt/pk-*.pem -c /mnt/cert-*.pem -a $AWS_ACCESS_KEY_ID -s $AWS_SECRET_ACCESS_KEY;\
ec2-upload-bundle --location us-west-1 -b $bucket -m /mnt/$prefix.manifest.xml -a $AWS_ACCESS_KEY_ID -s $AWS_SECRET_ACCESS_KEY --acl aws-exec-read;\
ec2-register --region us-west-1 -K /mnt/pk-*.pem -C /mnt/cert-*.pem $bucket/$prefix.manifest.xml;"

Another super important thing (I use Ubuntu) was to apt-get install the ec2-ami-tools package to make sure you have ec2-register installed. It will install Java as a result, but it will not set the JAVA_HOME environment variable; the tools just need to be able to find the binary, so I added export JAVA_HOME=/usr (Java is in /usr/bin/java).
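On the instance, assuming a stock Ubuntu install with the ec2-ami-tools package available in your enabled repositories, that boils down to:

sudo apt-get install ec2-ami-tools
export JAVA_HOME=/usr   # the tools only need to find /usr/bin/java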

After everything is up, you’ll still have quite the mess in your /mnt folder, but I wrote a cleanup script to deal with that.
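A minimal version of that cleanup, assuming the bundle image, its parts, the manifest, and the copied credentials are the only leftovers in /mnt, looks something like this (an illustrative sketch rather than the exact script; adjust the prefix to match your own):

#!/bin/bash

# Illustrative cleanup: remove the bundle image, its parts, the manifest,
# and the copied credentials from /mnt after a successful upload and register
prefix=yourprefixname

sudo rm -f /mnt/$prefix /mnt/$prefix.part.* /mnt/$prefix.manifest.xml
sudo rm -f /mnt/pk-*.pem /mnt/cert-*.pem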

Happy scaling!


How Having a Family Changed My Technical Focus

Posted: August 3rd, 2010 | Filed under: Companies, Lifestyle, Programming

When I first started coding, I was interested in building the geekiest, most technical solutions to problems that I could find, for fun: stuff like IDEs (before Mides), JavaScript dynamic UI generators, stuff to make programming more efficient. After having my kids, and advancing through the ranks, however, I found out what was most important. I stopped wanting so much to acquire power for power's sake, I stopped wanting to accumulate stuff, and I stopped wanting overwhelming amounts of money to do thing X. What I wanted in greater quantities was time.

Time is the only truly non-renewable, non-discoverable resource. For any other physical thing, one can imagine a future in which such a thing is renewed, discovered somewhere out there, or produced from something else, but time can only be conserved. It is this focus that has driven me to keep getting better as a businessperson, a manager, and a programmer.

A few years ago I realized that I had the technical ability to build whatever I wanted, perhaps not as efficiently or elegantly as more experienced developers, but I could get it done. I got pretty complacent for a while, and I lost the understanding of why I am developing software. It was then that I fell in with a bunch of user experience people, after the bubble burst and I had to find a normal job. I was building online classes for the Academy of Art University. I met some of the most amazing graphic and user experience designers I have ever worked with during that time. They exposed me to another function of code, one that has stuck with me and has become part of my technical focus: code could provide a better interaction by transferring some value to an end user. Namely, a well-designed, efficient user interface could reverse the clock, as it were, for some users. A good UX design could give time back to the user: time spent learning an arbitrary interface with no organic mapping and no feel, time spent being made to feel ignorant, all of that could be given back to the user of the software. Bonus points could be awarded if it were to make something that the user wanted to do more efficient, such as finding a good restaurant, or figuring out who in the organization was the best person to solve their specific problem.

I became a user interface builder, hoping to figure out some of the magic of user experience design, as this is clearly the most important part of any application.  I got pretty good at it, and being able to make users smile was definitely what kept pushing me to get better, but once I got married, and had kids, I became one of those users, the ones with no time, the ones who wanted stuff to just work.  Some interesting changes began to occur.  The first was that I ditched more complicated tools that provided less value for the time invested and started using Apple computers and software, not because I couldn’t understand the PC stuff, but because I didn’t want to waste the time on making it work the way I wanted it to when I could be spending time with my wife and kids instead.

As I rose in various organizations, I found an entirely new level of wasted time: work consisted of aimless meetings that propagated like a bad virus, and developers didn't document or test their code, leaving me to read for hours just to figure out what the code path was in some cases. In other words, while the goal of the company was to build products to provide value to its customers, that wasn't what it was spending the bulk of its collective time on; that time went into organizing itself, or fighting divergent agendas.

It is this last set of issues that has once again given me purpose.  I want to give hours back to working people through the code that I write, the teams that I lead, and the businesses that I associate with.  I believe that it is through good internal processes, healthy channels of communication, and heavy use of technical efficiencies that I can give time back to the people that I work with.  I believe that it is through developing software for regular working people at all levels to remove obstacles and to enable them to accomplish more in less time that we can all spend more time with our families, playing video games, developing new businesses or technologies, or just plain loafing around, whatever makes you you.

Part of that is a focus on making every user interaction as excellent as possible given the technology available, and the rest is choosing to build software that has a direct and measurable impact on organizational efficiency. Some people say that having a family and responsibility slows a developer down; in my case it has sped me up. I have learned how to delegate and how to focus on what is important. Most importantly, I have figured out how to execute. I think that I have learned how to spot time wasters and to find or build (or ask someone with more experience for) solutions to get rid of them. Only time will tell if I have actually got a handle on this, but I think that with some good strategic alliances to shore up my blind spots, I have a good shot at building software that can help keep our time expenditures down. Time saved is time saved, no matter how difficult it is to achieve; it's worth making every effort on this front.


Successfully Scaling an Organization

Posted: July 30th, 2010 | Filed under: Companies, Management

Startups face a myriad of issues in the beginning. They must find a way to become successful, they have to make sure that their product scales to meet demand, they need to find ways to raise money, and they need to be ever ready to pivot into another aspect of their target market. There are so many issues to think about that one often seems to get left behind: how do you scale your startup, people-wise? What do you do to make sure that you have the right talent with the right levels of responsibility? How can you ensure that you are retaining and challenging your best talent?

The above questions are a devil of a problem that creeps in fairly suddenly and often without announcing itself. The result is clear enough; most of us have seen it before: reduced output; fewer and fewer new product ideas bubbling up, or at least fewer and fewer product ideas bubbling up to the executives; no risk taking from anyone in the company; way too much time spent in meetings. Most of the prevailing wisdom is that you just have to stay small, that you should not get large. I completely agree that it makes sense for companies to stay small if possible, but sometimes you can't stay small. I would argue that it simply isn't possible for a company like Facebook to be small; the demands and requirements from their customers require lots of construction and cohesive solutions. What should they do? Should they break themselves into separate companies? Breaking up is typically not a solution that makes sense.

I'm not certain that I have the answers to these questions either, but I have experienced these issues in most of the places I have worked. In all of these places, the intent is always good: people want to overcommunicate; they want to make sure that everyone is heard and that all ideas are considered. Being good listeners and accommodating a marketplace of ideas are what all of the management books talk about. The issue is that when a company gets too large, the sheer time it takes to do that becomes prohibitively expensive.

The solution that I have seen put forth, and that I intend to use, is initially to keep teams small and allow those small teams to retain ownership of some critical pieces of the solution. In addition, I want to keep people building by allowing them time to develop along their own relevant interests and to sharpen their pitch skills by presenting to other teams and gathering feedback. That will hopefully help the teams grow, make it clear who the people with leadership skills are, and allow them to try and occasionally fail in a safe way that keeps the company moving forward.

Having small teams and keeping specific responsibility residing within those teams will help maintain accountability, minimize meetings, and reduce communication overhead, but it requires a lot of work from the vision holders. Not only do they have to have a stellar and clear vision, they have to have communicated it clearly to each and every person working in the company. That works fine for a very small company, but when you get larger it is harder and harder to make sure that everyone understands the goals and dreams of the company.

Funnily enough, most of the scaling techniques that are applied to large application servers work well in scaling engineering organizations: shared nothing, separation of concerns, et cetera. The issues begin to crop up in organizations that are not engineering organizations, and among groups who are not as highly intelligent, skilled, and motivated as engineers. I still believe that a company can successfully scale with lower-skilled staff, but it requires a concerted effort to decentralize the operations to the point where each small group operates in a largely independent fashion and the company still achieves its goals.

I believe that the answer lies, of course, with technology. Were senior management to embrace some of the social communication techniques that most teenagers employ, managing large groups of independent teams wouldn't prove to be so daunting, and they could directly provide leadership to a wider group. It would require them to work non-standard hours, and longer ones during which they are more accessible, but I believe that it should be possible to run a highly efficient and decentralized organization where everyone is actively contributing.

* Update *

Ben Horowitz wrote a great post on how to scale a company.


The Corporate Disconnect: Millennials Against the World

Posted: July 6th, 2010 | Filed under: Companies, Lifestyle, Programming, Uncategorized

Disclaimer: I am not a millennial; I am in that strange area between Generation X and Generation Y, closer to the Y. What does that have to do with the topic, you ask? It puts me in a unique place to watch the struggle of ideas unfolding between the engineers coming into companies and the engineers and businesspeople running corporations. This is not to say that all current executives are outdated, but in many companies they have failed to update their model of the world to match that of increasing numbers of their customers, and of the incoming flock of engineers.

The fundamental issue is that people who have had success in the past have a hard time considering that what gave them success in the first place is not likely to continue producing success. As an example, the existing business process for tracking hours is typically to have each individual estimate (after the fact) how many hours they have worked on a specific project on a given day. The newer method, as best as I can tell, is for engineers to estimate via points how much time a given programming task will take and to do a post-mortem if the task takes more points than estimated. Daily reporting is eliminated entirely, which is a better, more efficient process. It is also one that revolves around trusting the engineer.

Another example is that it is not uncommon for developers working on a project to push out information about that project to the public via Twitter, even down to the level of code commits. Users of that product can choose to follow the official company feed, or they can decide to follow their favorite engineers. The concept of privacy has been diminished to a large degree in modern companies. The benefit of this is that users become partners, not only in the debugging and troubleshooting process, but also in the development and planning phases. You can find, for just about any startup, engineers posting what features they are thinking about and feedback from engaged consumers, either providing amplifications, their own feature suggestions, or strong negations about where the company should be spending its precious resources. In such an environment, extreme secrecy is a huge liability. Likewise, within corporations, keeping the status of the company, and what the customers are saying about the products, from the engineers is disastrous to engineers' morale, as well as harmful to the executives' understanding of what is happening within the company. In more modern companies, the developers are treated like partners of the product managers and the executives.

I think most of the fallacy in this regard comes from the manufacturing metaphors that have dominated the majority of the corporate world's view of software development. When I look at the waterfall method, and some of the organizational structures around engineering departments, what I believe is being attempted is to reduce development to an assembly line with shift managers and the like. This can't really work for software engineering for many obvious reasons, but probably the most obvious is that programmers, even self-taught ones, have more in common with lawyers than they do with assembly line workers. Assembly line workers can highly optimize their tasks due to the extremely specific level of requirements, as well as the consistency of their tasks. Developers, and the product people working with the developers, almost never have requirements detailed enough to complete the given task. Similarly, developers have wide latitude to perform tasks in different ways as tools, managerial practices, and/or technology change, which is nearly daily at this point, while most manufacturing systems change once every 20 years or so, and a particular manufacturing worker can master their skills and have that mastery be applicable for their entire career.

Attorneys are typically highly specialized and operate with a widely varying set of rules; like software engineers, they need to parse and execute on sets of specifications (laws) to the benefit of the person contracting or paying them. Their interpretation of a given law may not always be standard, but if it achieves the intended goal, then they are considered successful. This interpretation, in law as in software engineering, is more of an art than a science. This variability in going about the job from day to day creates odd management challenges that are being exacerbated for software engineering management as the millennials come into the workforce. To a large degree, having fewer, more productive, empowered engineers is obviating the need for traditional engineering management. Of course someone needs to be accountable, but if you have small groups of developers, the group can be accountable for a specific feature. Small groups of engineers make it easier to triage why something went wrong and prevent it from happening again. Failure is part of the software development process, but it doesn't have to be a destructive part.

Millennials, and their immediate predecessors, appear to be very comfortable dealing with this sort of environment; they do not seem to need clear guidelines or even a clear goal. Many software projects that utilize the "agile" philosophy, which is itself becoming dated, typically manage the process with smaller tasks that everyone in the company seems to be involved in creating. The new crop of engineers seems to be more comfortable with the self-taught, with it being more about what you can show than what you have done. Résumés appear to be losing their value relative to a solid portfolio of open source work and products. My advice to people in high school and college about to enter the workforce is to work on a portfolio of applications first, or contribute to some open source projects, even more than attempting to get an internship at some big company. If they can make some money off their portfolio, all the better. Teams appear to be more distributed, with wide acceptance that each individual is working on their own business ideas not related directly to the company's goals or product portfolio.

All of these things fly in the face of the traditional command and control structure; however, I believe that this shift will speed the pace of innovation and improve the overall level of developers. Smart companies will harness this multitasking and openness and provide avenues for their developers to contribute new products under a "labs" or a "demo" banner, even if they have nothing to do with the products that the company makes. These companies will not mind when one of their "labs" projects earns more than their flagship product, and they will provide the creator of that product a team and budget to see how far they can go. That will rapidly become the only way to retain talent as the cost of starting a business online continues to drop. Executives at these companies will treat their developers as peers in strategy as well as in the software development lifecycle. It will become clear that this method of structuring a business is correct when not one but many startups offering services begin to completely demolish the incumbents. It is going to be an interesting ride… are you ready?