Today is a good day to code

Is PRISM Wrong? It’s Complicated

Posted: June 10th, 2013 | Filed under: Lifestyle, Uncategorized

I have been thinking about the surveillance issue a lot over the past few days, and have read a large number of opinions on whether what the NSA is doing is right or justifiable or constitutional.

I think that many people are missing part of the point. The documents we are reading about were never supposed to be released. It is akin to hearing one side of an argument: it is always easy to agree with a single side when you are missing all of the counterpoints that support the opposition's position.

In the case of surveillance, absent the documents that establish the need for this level of monitoring, we can only speculate about how we got from the Patriot Act to the Verizon request and PRISM. To have a valid public debate, we would need access to the FISA court's proceedings and the analysis of the original signals that led to PRISM.

I don't feel as though anyone can really say whether we should have a program like PRISM, or whether we need the massive data mining operations the NSA has purportedly been undertaking, because all of the documents that would educate us about what is actually happening in the world around us are classified.

What I can say is that I would guess the justification for this level of intrusion is quite strong. If that is the case, in the future we may be glad that what started with Carnivore and has progressed to PRISM has been in place. We just can't know.

It is unacceptable for our government to just say "trust us, it's for your own good." That is not the principle upon which our country was founded. The federal government's contract with the public is a partnership, not a patriarchy. It would be best for everyone if the federal government's bias were set back to "what can we have declassified?" from "how can we classify this?"

There is no reason I can see why much of the intelligence leading up to this, which is ancient by now, still has to be completely classified. Some representatives and members of the intelligence community have said that if they redacted the documents supporting PRISM, what would be left would be meaningless. Even so, just the volume of documents could indicate to the public how much support there is for the surveillance.

Classifying everything is a cop-out. Yes, obviously if we applied the full process of public scrutiny to the realm of homeland security, things would take longer, and the NSA and DHS would have to respond to legal challenges from all comers. That would be a good thing. We would probably all end up back where we are today, but we would all have agreed that this is the right thing for our country given the current environment. Instead, where we are now is like a mother dragging a kid by the collar to eat their broccoli.

Also, what some have proposed, ceasing use of Facebook, Google, and what-have-you, is ridiculous. It is equally ridiculous to call these companies complicit in something evil, or to entertain other childish ideas of how companies work. No one at these companies wants to violate the trust of their users in so blatant a way as to immediately destroy their business. To do such a thing would be a breach of their responsibilities to their shareholders. It wouldn't even make financial sense.

The opposite is actually more likely: they make much more money by doing nothing with your data on the government's behalf and instead using it for their own marketing intrusions. The only explanation for why they would allow this is that they have no choice. This database is precious to them; if it found its way to a rival company or a government, it would do real material harm. Even having this dual-access system in place is a giant risk, secret or not. They would never do this voluntarily.


There Is A Place For Bitcoin At The Table

Posted: April 22nd, 2013 | Filed under: Lifestyle


While watching the coverage of the recent Bitcoin rise and fall, one thing kept striking me as lacking. Most journalists have been constantly speculating about what Bitcoin can do for them in their cushy, safe, federally insured economies, but what they seem to be missing is what Bitcoin can do for countries with less than stable currencies.

To be fair, I have seen a couple of recent reports about what Bitcoin means to the rest of the world, but I think they are missing a couple of points.  Firstly, the media is obsessed with Bitcoin's worth relative to the Dollar, the Euro, and so on; this is completely irrelevant.  Bitcoin as a storehouse of value is great and all, but to frame it only as that limits its potential and betrays a lack of imagination.

Right now, if we look at Bitcoin, only those in wealthy nations can mine it.  Either you have the means to build a quite expensive GPU-based miner, you build an FPGA-based miner, or you purchase an ASIC-based miner.  All of these things are out of reach for most people in developing nations.  What isn't out of their reach, however, is making a product or service that would have value on the world market.  Today, for someone in a developing nation to move a product or service, they have to deal with corrupt exporters; and even if they have honest exporters, their goods cannot really find their true value on the world market, since they are sold to exporters in probably over-inflated fiat currencies.

The better way that Bitcoin makes possible is for these entrepreneurs to set their prices in Bitcoin and deal with distributors, shippers, and marketers in a currency that has real value, regardless of what their local economy is doing.  This is a more efficient method of rewarding work, without the political stigma of a devalued currency attached to it.

In the case of a digital good, such as a book or music, a musician entrepreneur, for example, can now produce music, distribute it via BitTorrent, and take payment in Bitcoin, all without any middlemen.  This is a beautifully efficient system of value exchange that the world has long needed.


How Google Can Save Retail and Give Amazon a Black Eye in the Process

Posted: October 10th, 2012 | Filed under: Amazon, android, Apple, artificial intelligence, Companies, Google, iPhone, Lifestyle
Montgomery Ward, closed down

Looking at Google's new Maps inside view brings to mind a general problem with physical shopping versus online shopping.  With online shopping, I know exactly who has the item that I wish to buy, and I know what the price of that item is.  I can instantly comparison shop without leaving the comfort of my home.  This convenience has a downside as well: when I do not know exactly what I want to buy and am just shopping for entertainment, the online experience lacks substance.  It is much more fun to peruse Best Buy than it is to scroll down a page of pictures of gadgets.  This is where Google can help.

One of the things that Google has done that has no clear immediate value to the company is to map the world in extreme detail, which has come to include the insides of stores.  Amazon does not have this capability.  In addition, Google has its Hangouts technology which, when combined with this inside indexing, gives Google both a search index of the real world and the ability to provide a high-fidelity experience with an actual salesperson.

Imagine that Google indexes all of the shops in the world, coffee shops, hot dog stands, everything, along with real-time inventory of the items, and surfaces that in search results.  Then they index those images using OpenCV or some other image recognition technology.  Alongside that, every retailer in the world assigns one or more salespeople inside the shop to carry a tablet capable of running a hangout.  Again, this represents a giant biz-dev nightmare, but bear with me.

Now comes the beautiful part.  I am at home surfing the web on my tablet when I get the itch to go shopping.  Instead of hopping into my car, I allow Google to suggest things I might be interested in ( Amazon has a huge lead here, but Google will likely catch up due to having more signals ).  While I'm looking through the suggestions, I see a watch that I am very interested in, so I click into it and it shows me a map of all of the places around me that have that watch.  I click again and ask for a horizontally swipeable inside view of the top 5 locations that have the watch.

I can actually browse the inside of the store and see the display with the watch in high resolution.  There will be a little place inside the store that I can click if I need help, say if the watch is not on display, or the shopkeeper will be notified that I am browsing.  At this point, the shopkeeper can signal that they want to have a hangout with me on g+, or I can swipe to the next place at any time and browse that one.  If I do want to discuss the item in a hangout, I can either initiate it or respond to an invitation from the shopkeeper.  While on the hangout, the salesperson can exercise their craft: showing me alternate items, asking me to send data over, such as measurements, exchanging documents, and so on.

This future would be tremendous, and it is something that only Google can do.  But wait, there's more!  Imagine that at this point, with my Google Glass, I can have a full AR view with the details of each item coming up in my heads-up display, along with other shops' more aggressive deals ( read: ads ).  It would be ridiculously awesome!

Ultimately this will level the playing field between online and brick-and-mortar retailers, with the brick-and-mortar guys having a slight advantage until the online retailers start hiring sales reps for g+ hangouts or an equivalent technology.  I believe that this will bring a pretty large increase in the number of salespeople employed and reverse the current employment drain that retail is experiencing.  It makes perfect sense why Amazon is trying to build out its mapping technology as quickly as possible.  It will be interesting to see who wins.


Minority Software Engineers are the Canary in the Coal Mine

Posted: October 31st, 2011 | Filed under: Companies, Programming

After reading Mitch Kapor's post "Beyond Arrington and CNN, Let's Look at the Real Issues" on minority software engineers and founders in Silicon Valley, I felt that I had to respond. I think that we are missing the real point of what it means that there is a dearth of qualified African-American candidates for employment or as technical founders. What I have found is that in Silicon Valley, fairness is crucial. Everyone living and working here in the tech industry is terrified of bias of any sort, as it indicates a level of irrationality that would make any engineer nervous. Everyone here prides themselves on detached analysis of value.

I think we are all, including myself, a black engineer and entrepreneur, missing the point. The rates of incarceration, dropping out, divorce, alcoholism, drug addiction, and so on are all far higher for black Americans, and for most minorities, than the mean. No matter what else happens at the point of hiring, the pool of people available to hire will suffer; in general there will simply be fewer potential candidates, period. No bias necessary. Many of the people who would have become the artists, programmers, and brilliant business people end up washing out by getting caught in the aforementioned traps.

Why do minorities get caught in these traps? Lack of education for the parents breeds lack of education for the children. Lack of empathy for the parents breeds lack of empathy for the children. Data bears this out. This is bad and it needs to be fixed before attempting to hold industry accountable for its hiring practices.  While the erosion of minority groups' chances for success in this country is a serious problem, what is worse is what it portends for the rest of the country.

All of the above traps are catching more and more people from other groups as well. The decay in the quality of basic education, combined with the reduced capacity of many to find meaningful work at the entry level, has all but stopped class mobility in our country. Just look around: how many people do you know, born in the past 20 years, who have transitioned from poverty to middle-class wealth? I can tell you, not many.

Eventually, this will not be a situation limited to minorities; the only people who will have the capability to found, or to work at a high level in an engineering capacity, will be the very few wealthy Americans, foreigners, and immigrants to this country who have had a quality education. Quality educations do not just happen in school, either; they happen in a child's free time. Does your kid just play Call of Duty? Or does COD make your kid curious about how it is made, so that they play COD interleaved with reading books on OpenGL?

It would be trivial to refactor the educational and social systems of our country to facilitate this curiosity, but the other problems of unstable home life, uneven attendance, huge differences of wealth between school districts, and criminally low standards for teachers, as well as ridiculously low pay, make it as impossible as bipartisan budget resolutions.

Startup founders, or people even in a position to think about creating a startup, are a minority of a minority of a minority. There are, strictly speaking, too many variables that could create the appearance of bias to be certain there is one. Only when you have minority founders showing up on Sand Hill Rd with 3 million active users per month and a 3% conversion rate to paid, yet still getting turned down left and right, should we think there might be a problem. And let me tell you, if a two-foot-tall green alien showed up in Mountain View with those numbers, every VC in the valley would be lining up to fund them.

The most important thing for us to do now is to think about what prevents people from getting to the stage where they could consider starting a company, no matter their color, and fix those problems. Until then, it is a useless indulgence to talk about biases in the tech industry.


Adding Machine Learning to Nc3 Bb4 Chess

Posted: September 29th, 2011 | Filed under: artificial intelligence, chess, JavaScript, nc3bb4, Programming

While the garbochess engine is plenty strong as used in the Nc3 Bb4 Chromebook chess game, I thought it would be interesting to look at adjusting the weighting mechanism based on successful and unsuccessful outcomes.

The first thing I had to look at was how garbochess weights potential moves.  This took me into the super interesting world of bitboards.  A quick aside: I have been working on MapReduce for the past few weeks, so looking at early methods of dealing with big data was fascinating.  Chess has an estimated ~10^120 possible games, and successfully evaluating all of the possible moves for a given position, plus all of the possible counters, weighting them, and choosing the best possible move given some criteria certainly qualifies as big data.

Interestingly, the approach wasn't the Hadoop approach; the hardware for such brute-force methods wasn't available.  Instead, early chess programmers tried to filter out undesirable moves: obviously bad moves, moves that had no clear advantage, and so on.  What they ended up with was a pretty manageable set of moves for a circa-2011 computer.

The way garbochess considers moves, it looks at the mobility of a given piece, control of the center, whether a capture is possible, what the point differential for a trade would be, and so on, and assigns a score to each possible legal move.  It then runs through the set repeatedly, re-scoring moves relative to the available alternatives and removing the lowest-scored ones, eventually arriving at the best possible move.  What I wanted, given that and the specific weights (mobility versus actual point value for a given piece), was to use a Markov chain for reinforcement learning to describe the entire process of a game, and then to rate each move with a weight enhancement that treats endgame moves as more important.  Every time the machine takes an action that leads to a success, the bias on the scoring for that action gets heavier.  Failure doesn't automatically nullify the learning, but it definitely has an effect.
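
To make that concrete, here is a rough sketch of the outcome-weighted scoring idea.  It is in Ruby for readability (the actual engine is JavaScript), and the class, constants, and method names are mine, purely illustrative, not garbochess's:

# Rough sketch of outcome-weighted move scoring. Illustrative only; the real
# engine is JavaScript (garbochess) and uses its own scoring internals.
class MoveLearner
  LEARNING_RATE = 0.1
  FAILURE_PENALTY = 0.05  # failure dampens, but does not erase, what was learned

  def initialize
    @weights = Hash.new(1.0)  # keyed by [fen, move], default neutral weight
  end

  # Scale the engine's static score by the learned weight for this position/move.
  def adjusted_score(fen, move, engine_score)
    engine_score * @weights[[fen, move]]
  end

  # After a game, reinforce every move on the played line. Later (endgame) moves
  # get a larger adjustment via the progress factor, which grows toward 1.0 at
  # the final move.
  def learn_from_game(moves, won)
    total = moves.length.to_f
    moves.each_with_index do |(fen, move), i|
      progress = (i + 1) / total
      if won
        @weights[[fen, move]] += LEARNING_RATE * (1.0 + progress)
      else
        @weights[[fen, move]] -= FAILURE_PENALTY * (1.0 + progress)
      end
    end
  end
end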

Where I got to was a rudimentary implementation of this, because a bunch of housekeeping chores popped up.  For example, since this is JavaScript and all I really have is HTML5 storage, how do I store all of the moves while keeping the system responsive, with no O(n^n) or O(n^2) lookups?  What I wanted was to keep it O(n).  Obviously that called for a HashMap of a sort, but the serialization / deserialization plus the key system were a challenge.  I didn't want it to cause too much overhead for the move-scoring system, as the bit twiddling is already pretty efficient, so I did the best that I could using the FEN + PGN.  The FEN is the state for the Markov chain, since a given PGN move could occur in many different situations, and otherwise the weighting could never be applied against the gravity of the particular position.
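
For the storage side, here is a minimal sketch of what I mean by a weight table keyed by FEN with the PGN move as a sub-key.  Again, Ruby for readability; the real code is JavaScript serializing JSON into HTML5 localStorage, and the file name here just stands in for the storage key:

require 'json'

# Sketch of the FEN-keyed weight table. Lookups stay constant time during
# scoring, and the whole table is serialized in one shot, the way a single
# localStorage setItem would be.
class WeightStore
  def initialize(path = 'weights.json')  # stands in for the localStorage key
    @path = path
    @weights = File.exist?(path) ? JSON.parse(File.read(path)) : {}
  end

  def lookup(fen, pgn_move)
    (@weights[fen] || {}).fetch(pgn_move, 1.0)
  end

  def update(fen, pgn_move, weight)
    (@weights[fen] ||= {})[pgn_move] = weight
  end

  def persist
    File.write(@path, JSON.generate(@weights))
  end
end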

I need to do more work on weighting changes based on how much trouble the machine is in, whether it has an advantage or not, and so on.  But as a start for machine learning in chess, I think it works.


Teaching my 7 Year Old Daughter to Code (Crypto)

Posted: August 24th, 2011 | Filed under: Lifestyle, Parenting, Programming, Ruby, Teaching Coding, Uncategorized

When I started teaching programming to my children, I thought starting with JavaScript was a good idea. I still think that JavaScript is one of the most important languages to learn early in a programming career.  It just doesn’t seem to be the right choice for teaching someone to program when they are 7.

The reason is likely not what one would naturally think: the code isn't too opaque, and the syntax wasn't much of a problem for her.  It was just so much work to get output.  With my son, we worked on a really simple Python program on his hand-me-down OLPC; my daughter was upgraded to an Acer Aspire 1 for her birthday.

With Python, I felt like we made more progress due to the availability of a REPL.  We were able to make changes to the core code that was solving the problem and see results on the command line quickly.  With JavaScript, we had to create an HTML page, load it into the browser, create some sort of markup output, and so on.  It just wasn't as clean an approach to programming.

I have been asked by many people about using "kid friendly" programming languages.  I think the people working on those are doing good work; however, there is nothing "kid unfriendly" about the languages that I use for programming as an adult.  I think that in general, when educating our kids, we need to stop coddling them so much.  Creating an approximation of an already dumbed-down environment for writing software to drive machines will not help them.  Most of the kids that I have seen are already beyond Logo and they don't even know it.  What they seem to want to do are real-world things, and there is no reason they can't.

What I settled on was to use Ruby for the tasks.  It is a language that has a great REPL, and is easier to read.  It also has the benefit of having a sane shell input mechanism as well as not requiring a ton of objects to get started.

We discussed what she wanted to do.  There were several things, all of them deeply social, but the one we settled on was the one I thought was easiest to implement: encrypting messages to her friend, where only she and her friend have the crypt key.  I thought that would be enlightening.

She agreed, so we started coding it up.  First we worked through a few encryption techniques on paper, taking short messages, converting them into their character codes, and then shifting them by adding the char codes of each letter of the crypt key to each letter of the message.

# Encode a message by shifting each letter using the characters of the key.
def encode(msg, key)
  coded_msg = ""
  msg.each_char do |letter|
    coded_msg += add_cipher(key, letter)
  end
  coded_msg
end

# For each character of the key, add its character code to the letter's code
# and pad the result to three digits so every chunk has a fixed width.
def add_cipher(cipher, letter)
  code = ""
  cipher.each_char do |cl|
    code += pad(cl.ord + letter.ord)
  end
  code
end

# Left-pad a number with zeros to at least three digits.
def pad(num)
  num.to_s.rjust(3, '0')
end

At first, she put the crypt key into the program, but we discussed that this would be a bad idea, since anyone with the source code could then crack the message.  She then asked me how her friend would decode the message.  I told her that the only way was for her to create a "pre-shared key," something that she tells her friend verbally and that they both have to remember; only then could that key be used to decrypt the messages.

What we did was create a multi-step command line program to accept the key and then the message.  We haven't gotten around to the demux yet, but here is the mux:

print "Enter the crypto key, or die!: "
key = $stdin.gets.chomp

print "Tell me what to do ( 1 for encode, 2 for decode ): "
op = $stdin.gets.chomp

if(op.to_i == 1)
	print "Enter message to encode: "
	message = $stdin.gets.chomp
	coded_message = encode(message, key)
	print "Here is your message: \n"
	print coded_message
else

end
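
When we do get to the demux, I expect the decode to look something like this rough sketch.  It relies on pad always producing three-digit chunks (true for ASCII text) and on the fact that only the first chunk per letter is actually needed to recover it:

def decode(coded_msg, key)
  group_size = key.length * 3              # each original letter became one 3-digit chunk per key character
  decoded = ""
  coded_msg.chars.each_slice(group_size) do |group|
    first_chunk = group.first(3).join.to_i # the first chunk is key[0].ord + letter.ord
    decoded << (first_chunk - key[0].ord).chr
  end
  decoded
end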

The nice thing about all of this is that the code is approachable, and the execution path makes sense: this happens, then this happens, and so on.  She can easily understand the flow of this program.  We had significant problems with the flow of a client web application.

One of the first things that my daughter noticed was that whenever you make the crypt key longer, the message gets a bit longer, and that the encrypted message is many times longer than the original.  So it is working: not only is she picking up programming, but basic cryptography as well.  The only thing I am concerned about now is what happens when she is encrypting her posts on the social media site du jour at 16 with quantum encryption techniques.  How will I ever crack her codes?


The Post Tablet Era

Posted: July 30th, 2011 | Filed under: android, Apple, Companies, Google, iPhone, Lifestyle, Media
Google Chromebook

The tablet entered with a huge bang a few years ago.  It was staggering: Apple sold an incredible number of iPads and forced all of the netbook manufacturers and Google to scramble to produce and release a tablet OS, namely Honeycomb, that was arguably not ready for release.

The result, with both iOS and Honeycomb, is two excellent tablet OSs, and Ice Cream Sandwich promises to be a stellar tablet and smartphone OS.  What I have been discovering over the past year-plus of using both versions of the iPad and the Galaxy Tab 10.1 is that I don't really need a tablet for general computing.

This is surprising to me.  I built an IDE for the iPad and iPhone after all, and found myself using my own product more on the iPhone for quick edits than I did on the iPad.

I watch an awful lot of Netflix on the iPad, and I play games most of the time that I am using it.  I have found that with the Galaxy Tab, my patterns are much the same: gaming, watching videos, occasionally reading ( although I still prefer my Kindle hardware to the tablet versions ).

So I am coming to the conclusion that the pundits were right initially: tablets are clearly for content consumption, not content creation.  The reason, however, that these devices are not suitable for content creation is worthy of debate, and is an issue that I'd like to take up now.

Natural User Interfaces

The user interaction that most tablets sport as the default is something that is being called a natural user interface, that is, an interaction that uses more of the user's senses, such as motion, to perform an on-screen action.  The current crop of tablets mainly uses touch instead of a dedicated hardware component to facilitate user interaction with the interface.

This lends itself obviously to gaming, and to a "kick back" experience of sorts.  The user can use touch, or the gyroscopes, to control a character on the screen, which makes logical sense to just about any user.

As an example, many role-playing games have a 3/4 view of the game board; that is, the camera is typically at 5 o'clock high, or somewhere thereabouts.  The control scheme for most of these games is to touch a place on the screen to send the character to that location.  Role-playing games work particularly well on tablets for this reason; they are almost better with a touch interface than with a controller.

As another example, car racing games use the accelerometer in the tablet to control an on-screen car.  This works well unless you are in a position in which your motion is constrained, such as in bed, so most of these games provide some sort of alternate touch-based interaction that replaces the accelerometer-based input.

The problem with using on-screen touch points in driving and first-person shooter games is that the controller now covers part of the screen, or your hands end up covering important parts of the game world, causing the player to miss part of what is happening.  I know that in my case, it takes away from the FPS experience and makes it so that I typically don't buy those sorts of games on tablets, preferring instead to play them on a console.

Natural user interfaces only work when the content is modified such that the user can interact with it sensibly using the available sensors: gyroscope, touch screen, microphone, et cetera.  In a famously bad case of using a natural user interface to interact with content from a platform that uses traditional input, Numbers presents the user with a typical spreadsheet like the one you would find in Excel on your Mac or PC.  The issue here is that Apple didn't modify the presentation of the content to match the platform.  Arguably, there is no way to do this in a form that makes sense.

The interface for Numbers features beautiful graphic design elements and is generally pleasant, but when you tap on a grid element, a virtual keyboard pops up and you are invited to type into the fields.  Apple has made a numeric keyboard interface which is pretty nice, but any time you display the virtual keyboard, you haven't thought hard enough about the problem.  Displaying a grid of content is not useful on this device; it is amazingly useful on the desktop, but it just doesn't work here.  Inputting large amounts of data is frustrating, and the virtual keyboard makes mistakes all too common, either because of mistyping or the misguided autocorrect.

Modifying Content for the Natural Interface

Most of the people who are buying tablets today appear to be tolerating these issues.  My belief is that they are doing this because tablet computers feel like a piece of the future they were promised when they were children, useful or not.  Eventually, they will likely stop using their tablets at all in favor of ultralight laptop computers, or they will relegate the tablet to the living room table as a movie-watching and game-playing platform.

It is possible to make significant user input acceptable on a tablet, perhaps even pleasurable, by using a bit of creativity.  First, the keyboard is a complete failure.  It has its place, but in most cases it can be replaced by effective gesture (non-touch) and speech recognition.  That is the only viable way to bring in large amounts of content.

On the visualization front, using our Numbers example, perhaps a flat grid is not something that makes sense on the tablet.  Maybe we should send the data to a server for analysis and present it as a series of graphs that the user can change, manipulating the graph directly with touch actions or with spoken commands.  The result of the changes would flow back into the spreadsheet, updating the numbers behind the visualization.

Many would argue that this would not be a rich enough interaction for some of the complex spreadsheets, pivot tables, and so on that they work with; indeed, it likely would not.  Most of these users would not perform those actions on the tablet; instead they would use a MacBook Air or other lightweight laptop computer.  It takes a huge amount of creativity and intelligence, as well as significant amounts of computing power, to manipulate data in this way.

Imagine a speech interface for a word processor that could use the camera to track your facial expressions to augment its speech accuracy.  It could, and should, track your eyes to move the cursor and ask you to correct it when you make a bad face at a misinterpreted sentence.  An application like this could make word processing on a tablet a wonderful experience.

The technology to do most of these things is here.  It is either fragmented, with each part patented by a different company, some without any sort of tablet, such as Microsoft with the Kinect, or the effort to produce a piece of software that utilizes the features of tablet computers to best effect is too great to justify the investment.  For example, doing that sort of work for a word processor doesn't make sense when people will just jump over to their laptop to use Word.  Would anyone pay $100 up front for an iPad word processing application?  I don't think so.  Would anyone pay $25 per month for the same application as a service on the iPad?  It's equally doubtful.

What you come to eventually is that, for interacting with content that either naturally lends itself to, or can be easily modified for, the tablet, it is fantastic.  Currently, however, it is severely overpriced for how it is being used.  After all, you can get a fairly cheap notebook that can play Netflix and casual games for $200, or one third the price of most tablets.  If you have to carry your laptop anyway, why would you have a tablet at all?  Why wouldn't you take the Air with you and leave your tablet at home?  It can do everything the tablet can do, and it can also handle any of the content creation that you care to try.

Thinking about the situation, we need to find better business models that will allow for the development of applications that can handle the modifications to content that we need for tablets to be generally useful.  This will take a while, and in the interim it is likely that some companies will produce tablet hybrids; the ASUS Eee Transformer is one that comes to mind.  It is very popular, runs a mobile tablet operating system, but becomes a keyboard-wielding notebook in a second.

The Google Chromebook is another example of a lightweight laptop, even in its software, that can do most of what a tablet can do as well as most of what the typical laptop does.  In my own use, excluding building applications for tablets, I always reach for my Chromebook instead of my tablets.  And all of this leaves aside the huge difference in the difficulty of building applications for the two platforms.

Writing applications for tablets is extremely hard, with a doubtful return on investment unless you are making a media or gaming title, while writing applications for the web is easy and potentially extremely lucrative, with many variations on possible business models and little interference from device manufacturers.

I am starting to think that Ray Ozzie was right when he said that Chrome OS was the future.  It feels more like the near future than the iPad does at this point.  The tablet will always have its place, and perhaps with significant advances in natural user interface technology, and accordant price reductions, it will start to take over from the laptop.  I am fairly bullish on the natural user interface over the long term, but at the same time I pragmatically understand that we aren't there yet.  The devices, software, and consumers have a lot of work to do before we really enter the era of the computerless computing experience.  I am committed to getting there, but I think that the current crop of tablets might be a false start.


Deciding When to Implement Features in a Startup

Posted: April 26th, 2011 | Filed under: Companies, Programming


Over the weekend I have been thinking about which features should be implemented and in what order. I realized that it didn't make sense to try to prioritize each possible feature, as there are thousands of them; that would take too long and not necessarily result in a reasonable prioritization. So I started to think up a framework for deciding which feature would be implemented when.

One of the first things to think about was the framework of the startup, or which development strategy the organization is using. In my case, we are using a lean-agile approach. This approach suggests that feature decisions should be based upon whether or not the implementation of a feature will positively affect any of the key metrics of the startup. An example would be, if you have a freemium product, conversions from free to paid.

What I would add to that philosophy is that it isn't enough that it move one of the metrics; it has to move the right metric for your startup at its current growth stage. If you are raising money, then gaining some sort of traction is likely more important than long-term customer retention. If you already have product-market fit, and you earn revenue through use of the product, then it makes sense to focus on making your software more pleasant to use.

To clarify this for myself, since I tend to obsess about usability a bit, I thought back to when I switched from the PC to the Mac. I was telling myself that it was largely because I liked the usability of the product better; while this was true, it wasn't the real reason. I had just received a video camera at the same time and was playing around with various low-end PC video editing software, and it basically sucked. Then I found out that the Mac came with iMovie. iMovie was far from perfect, but it was really good and gave me a capability that I didn't have before. I could now edit long home movies easily and burn them to DVD.

I realized that Apple got me to convert on a first order feature, and retained me on a second order feature. That is, they gave me a capability that I didn't have before, thereby converting me from a non-Apple user to an Apple user, and I created mainline revenue for them.

Apple has always been about this. Thinking about the iPhone 1, what did it do for me? Well, it didn't have a ton of features, and some of the ones it did have were missing obvious things, like copy and paste, which took a long time to arrive. At first I didn't understand why, but later it made sense. Apple was optimizing for that initial conversion: the product gave me the capability to use the internet and share photos in a minimal yet useful way. That was what made me convert. If Microsoft had done that, I would have bought a Microsoft phone that day.

So what I decided was that I needed to figure out what the startup needed. Right now we are fundraising and trying to acquire customers, so to me that means we need features that get people to buy. Those would be first order, first priority features: the things that would make a material difference for the startup and answer the question for the customer, "What fundamental thing can I do with your product that I can't already do?" That is the question you must answer to get people to convert.

Second order features, the features that make your product nicer to use, are all incredibly important. However, depending on your business model, you may be wasting time, given your organization's goals, working on second order features when you haven't figured out the first order ones. Frequently, when you see startups die, this is the reason. They are working on things like performance, or some tricky usability thing that is really awesome and hard, so it is sucking up time; meanwhile, you aren't adding users because you have failed to answer the critical question. Maybe you know what your product will do in the end that is transformative, but your prospective customers don't. You must work on transformative, game changing, disruptive features first.


Why Mining Instead of Farming in Software can be OK

Posted: April 3rd, 2011 | Filed under: Programming

Wil Shipley wrote a great article about mining vs farming when building software and a software company.  I have been thinking about it and I wanted to offer my perspective on it.  Let’s start with this quote:

The problem with mining in the software business is that it doesn’t work. It creates broken, useless companies.

Founders and angel investors usually don’t particularly care if the companies they created live or die after they sell out, because they’ve gotten their money and moved on. There’s no stigma to having a company you founded fail after you leave it. In fact, again, it’s a badge of honor: “Bob Smith founded Flopper.com, sold it for $46MM, then got out before it tanked! Genius!”

Well, while I agree that there is a clear incentive to get to market and sell, people do like money, and that it does create some broken companies, I don't agree that this is a bad thing.  The entire point of a startup is to search for a viable business model.  Once you find it, occasionally the founders can continue on, but frequently the founders find somebody who can clean up the mess and make a killing off of the awesome opportunity they have discovered.  However, and this is where the broken companies come in, sometimes the founders never find a business model.  They love the idea so much that they sacrifice too much in quality, spirit, and finance to keep something going that can never work.

So in a healthy economy, in general, you are going to have some companies that just have broken business models and terrible code and will fail.  Then you have companies that want to farm, so they take huge amounts of funding, or time, to build up fantastic and impeccable structures of code, but they haven't released anything, so they don't really know if customers will want it.  They build for years and launch, only to find that customers yawn, or worse, are antagonistic to the product after the hype cycle.  Then you have a company with great code and great developers that can never work as a business.  The founders often take more money, rinse… dry… repeat…

Finally, you have startups that crank out an MVP.  The code behind it is terrible and the execution is only so-so, but they get it to market and customers love it.  Now they have found a market.  It's time for the professionals to come in, clean up the code, allow the founders to focus on product, bring in professional managers, and go… go… go…  Unfortunately this is rare; typically, terrible architectural decisions will make it challenging for the team to ever get ahead of the fixes and start iterating on the features that customers are telling them they want.  However, it is frequently necessary to keep analysis-paralysis at bay and get it done.

You can mine, as long as you acknowledge that you are mining.  This is where code and business-concept documentation come into play.  It is fine to write terrible code to get to market quickly, as long as you have documented any shortcuts, and in general documented everything assiduously.  If someone has spent months coming up with a fantastic piece of code but doesn't document it, it doesn't really help anyone else learn.  It also makes it difficult, if the code ultimately isn't so clever, for someone else to fix it, because they don't really understand the rationale behind what the author was thinking when they wrote it.

The "ramp" of someone getting up to speed with code is largely due to negligent documentation.  You can find this in "farming" companies or "mining" companies.  If someone can read a brief synopsis of what a piece of code is supposed to do, why it was supposed to do it, and how, they can typically come to an understanding of what was in the author's head.  Otherwise, you have a long slog ahead of you, reading through the code and either writing out, or mapping in your head, all of the possible branches, dead or otherwise.

The secret of success turns out to be so incredibly simple: Work your ass off. Really care about what you’re creating, not the fame or fortune you’ll get. You’ll succeed.

I definitely agree with this part: just focus on your customers.  Make sure they are surprised and delighted when they are using your product.  No one should be in this just for profit, mostly because it isn't worth it.  Coders spend so much time behind their keyboards that they damn well better love what they are doing or building; if they don't, they will quit or burn out.  This is the real truth behind the developer shortage.  There isn't a shortage of good developers; there is a shortage of great concepts for them to work on.  If you build it, believe in it, and it's awesome, they will come.


Teaching my 7 Year Old Daughter to Code (Mazes)

Posted: February 5th, 2011 | Filed under: Lifestyle, Parenting, Teaching Coding

Today we advanced in our lessons.  I started teaching my daughter how to code about a month ago.  I started when I was about 7 myself, so I thought it was high time I got her going.  I bought her an Acer netbook a while back.  Windows 7 Starter is a bit weird, but it does what we need it to.

I started teaching her JavaScript / CSS / HTML because I wanted her to be able to see her work immediately, and UI work is usually best for that.  We went through some basic typing, data structures, and some general information like number systems, hexadecimal, and so on.  She stayed with me.  We did that for a few lessons.

I typically keep her for about 1 hour per session.  Sometimes she wants to go for longer, sometimes she loses interest after about 20 minutes.  I always let her go when her attention starts to waver.

Today we started on our real project.  The first few things we built were simple: a list of her favorite things, some colored blocks, drawing some lines with the canvas element.  But the end goal is to make a maze game for her brother, who loves mazes and is 3.

At first her suggestion was to make mazes manually, but after we thought about it a bit more, she realized that she would need to build many, many mazes to keep her brother busy.  So we started to think through how we could make the computer create the mazes.

My Daughter's Maze Generation Algorithm

Some of the problems we thought through: first, how the machine should represent the maze.  We considered 3D constructs, which were crazy complicated, so I suggested we do those later, and eventually we settled on a battleship-style grid.

The next thing was to define the rules of movement through the maze for the computer while it is creating the path from the beginning to the end.  She decided that the computer should choose an entry point and an end point, and after some discussion we decided that they should be on the outside edge of the grid.

Then we thought about whether or not we wanted the entry and exit to possibly be the same.  My wife suggested we play Simon Says for a bit, since we were talking about instructions, which was very helpful.  Then we decided that the computer, and her little brother, couldn't move diagonally, but we needed a way to write that rule so the computer would understand it.  The number-and-letter naming scheme she came up with toward the end of the hour.
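
For other software engineer parents following along, here is a rough sketch of those rules as code.  The real project is JavaScript on a canvas; this is Ruby for brevity, the grid size and helper names are mine, and I am assuming here that the entry and exit must differ, which we haven't actually settled yet:

GRID_SIZE = 8

# Pick a random cell on the outside edge of the grid.
def border_cell
  case [:top, :bottom, :left, :right].sample
  when :top    then [0, rand(GRID_SIZE)]
  when :bottom then [GRID_SIZE - 1, rand(GRID_SIZE)]
  when :left   then [rand(GRID_SIZE), 0]
  when :right  then [rand(GRID_SIZE), GRID_SIZE - 1]
  end
end

# Battleship-style name: a letter for the row, a number for the column.
def cell_name(cell)
  row, col = cell
  "#{('A'.ord + row).chr}#{col + 1}"
end

# No diagonal movement: a legal step changes the row or the column by one, not both.
def legal_move?(from, to)
  (from[0] - to[0]).abs + (from[1] - to[1]).abs == 1
end

entry = border_cell
exit_cell = border_cell
exit_cell = border_cell while exit_cell == entry  # keep picking until they differ
puts "Enter at #{cell_name(entry)}, leave at #{cell_name(exit_cell)}"
puts legal_move?([0, 0], [0, 1])  # true
puts legal_move?([0, 0], [1, 1])  # false -- diagonal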

Teaching her how to hack is going surprisingly well.  I do not think that I have an overly gifted daughter, but I think that teaching coding by writing code is the best way.  For defining a set of rules though, a piece of paper is always a good first step.

The ultimate difficulty is that she wants to put the game on the iPad, and eventually in the App Store.  We'll see how far we get with that.  I suspect that most of the iOS coding will be done by me ;-).  I am hoping that this helps other software engineer parents who want to figure out how to teach their kids the art.  I'll try to chronicle my setbacks, as well as my successes, in subsequent posts.