Posted: November 27th, 2015 | Author: irv | Filed under: android, AT&T, Companies, Google, Tech Help | Tags: AT&T, carriers, Google, Sprint, t-mobile, verizon
I found out about Project Fi early on, when a few of the blogs began talking about it. At the time I was skeptical, not about Google's ability to pull the project off, but about whether people would accept routing their voice calls and SMS through Google's data centers. A couple of weeks ago, I did a ton of reading on Fi and decided, based on what I had read, to go ahead and sign up for the beta. It took a couple of weeks to get my invite, and even then it took me a few days to think through what it would mean to be a Fi customer.
My thought process went through three distinct gating concerns. The first was whether I felt that my calls, SMS, and network traffic would be safe being piped via VPN from anonymous access points through Google’s data centers and to their ultimate destination. If I could get through the first gate, I felt that it was time to think through the second.
The next gating concern was whether there was enough value over T-Mobile for me to make the jump. I was already frustrated with T-Mobile's default opt-in "Binge On" promotion, so I felt it was a good time to move. My experience with T-Mobile over the past two years has been nothing but good, and I'm going to keep the rest of my family's lines with them, but I use very little data, so I figured I could save a good amount of money with Project Fi, and I haven't been wrong.
The final gating factor was whether Google's customer service would be adequate for my needs. On this I had some of my largest concerns: Google isn't known for legendary customer service, and while I don't typically need a lot of help, with wireless service you never know. I haven't had to use their support yet, so the jury is still out, but their self-help has been excellent, so I decided to give it a shot.
I got over the first hurdle surprisingly easily. I do not believe it is in Google's interest to do anything untoward with their users' data, as it would destroy their business, so I'm not that worried about Google doing anything nefarious with my calls or texts ( if they can even access them ). As for turning my data over to the government, that wasn't really a worry: T-Mobile, Sprint, AT&T, and Verizon will all turn over the same data on demand, so that is a push even if Google cooperates. Finally, VPN technology has been around for quite a while and is well regarded as secure, so going through a VPN to Google's data center is, as far as I'm concerned, much safer than going through some coffee shop network or Comcast's back end. In addition, the quality of service over Wi-Fi would probably exceed the spotty coverage I've had in some areas with T-Mobile, so… gate opened.
After looking through my records for the past few months, I found that I typically use far less than 3 GB of data per month, since I am usually in solid Wi-Fi coverage. I was a little worried about how many hotspots there were around me and how well the Nexus 5x would traverse the networks, but this, so far, hasn't been an issue. There are a number of affiliated Wi-Fi hotspots around the Bay Area, and even more in San Francisco, so where I roam there are plenty of options that will not cost my precious gigabytes.
I chose the 2 GB plan, bringing me to a total of $40 a month plus tax. So far I have used 0.19 GB and I'm about forty percent of the way through my plan period, so I'll be getting a refund at the end. This brings up one of the better things about Fi: they refund you for any data that you did not use, and they charge no overage penalty in months when you surpass your estimate. That means that if I need to tether for work one month, I won't be stuck paying extra during all of the other months. After buying a Nexus 5x and selling my Galaxy S6 Edge, I was out about $45 up front, and I will save about $60 a month on my line alone. So it is definitely a value for me, though YMMV: for some heavy mobile data users, Fi will end up costing more.
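As a back-of-the-envelope check on those numbers, the billing seems to work out to a flat base fee plus a per-gigabyte rate, with unused data credited back. The constants below are inferred from my own 2 GB / $40 plan, so treat them as assumptions rather than official rates:

```python
# Rough sketch of how my Fi bill appears to work: a $20 base plus $10 per
# budgeted GB, with unused data credited back at the end of the cycle and
# extra data billed at the same flat rate ( no overage penalty ).
# Rates are assumptions inferred from my own 2 GB / $40 plan.

BASE = 20.00
PER_GB = 10.00

def monthly_bill(budgeted_gb, used_gb):
    upfront = BASE + PER_GB * budgeted_gb
    # Positive when you went over, negative ( a credit ) when you went under.
    adjustment = PER_GB * (used_gb - budgeted_gb)
    return upfront, round(upfront + adjustment, 2)

upfront, actual = monthly_bill(2.0, 0.19)
print(upfront)  # 40.0 paid at the start of the cycle
print(actual)   # 21.9 effective cost after the unused-data credit
```

Run with this month's usage so far, it shows why the refund model is so attractive for a light data user: the effective bill tracks what you actually used, not what you guessed.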
Overall, I have been extremely happy with Fi and would heartily recommend it. It has been stable, with great call quality. Using the Nexus 5x has been pretty good; the phone is occasionally laggy, but I think that owes more to a few missing optimizations than to any inherent limitation in the hardware. Other phones with the same specs running Android 5.x are smoother, so I believe things will get better on that front. The Nexus 5x has had outstanding battery life for me on M and on Fi, granted I'm always in a good service area and am not a "heavy" user. I tend to get around 48 hours of battery life from it.
Posted: March 5th, 2014 | Author: irv | Filed under: Companies, Facebook, Google | Tags: ads, facebook, Google, marketing, network, social
While the current group of social networks is extremely popular and appears unassailable, it is my assertion that they are eminently fallible. Over the past year, it has become obvious that they are having difficulty attracting users. Facebook needs to grow by purchasing smaller companies for ever-increasing amounts of money. Google can't really build a critical mass of users around its offering even though it is technically excellent. Even Twitter, the most non-social of the social networks, is having trouble attracting new users while implementing features to increase its revenue streams.
Arguably, one could claim that as everyone joins the network, there aren't additional people left to add; this is the theory of saturation. But while Facebook asserts that its daily-active-user count is huge and that people are spending more time than ever logged in to Facebook, how many of those actives are just people checking their messages, or people who have Facebook set up on their phones and are technically logged in all day? This is a fantasy; people aren't really using these services as much as the companies would have you believe.
Originally, the promise of social networking was that people we knew, and even more compellingly, people we don't really know, would create excellent and relevant content, thereby attracting even more people who would create great content. This would create a virtuous cycle of content creation and, given the user growth, would make the platform lucrative in its advertising reach. This is all known. What I believe everyone is currently ignoring, in bubble, unicorn-herd fashion, is that the cycle has been severely weakened and the revenue models are broken.
What is ironic is that it has been weakened by the very thing that has made such scale on the internet viable: The desire of advertisers to pay for access to the social users.
It is a scenario that plays out in every market everywhere. Initially someone produces something of value, and the market forms around that. As the product evolves, the producer can easily see what works, what encourages margin and price growth, and what doesn't, what causes price and margin to decrease.
Most recently we have seen this occur in the PC market. We are now to the point where vendors are completely optimizing on a single dimension: price. Computer buyers ( obviously excepting those who purchase Macs ) have spoken with their dollars, and their dollars want the best value for money. Hence netbooks, and $300 laptops loaded to the gills with shizware. While it is shocking that people will accept this, it is what the consumer has chosen.
The social space works the same way. The users of the social product, Gmail, Facebook, Google search, etc… are the product, and the advertisers are the customer. At first, users were drawn to the utility and functionality of the services. In Facebook's case, interestingly, the initial value proposition was that one could have a private relationship with one's friends, out of the view of the internet. There was originally no danger that your content would appear to anyone you had decided it shouldn't.
As time has gone on and these services have attained what they believe is a critical mass of users, the impetus for them to improve the service, protect their users' privacy, or provide real value to the users has diminished. The incremental income from each new user was less than what could be made by increasing the amount that advertisers were willing to pay. This has been accomplished either by increasing, or inventing, new surfaces in which to deliver ads ( i.e. the Facebook feed, Paper, etc… ), in effect allowing the service to sell more inventory and spam its internal user base, or by taking content that was originally private, but could be used to deliver ads, and making it public. I won't even start on the morality of the latter, but they have no choice: a free product at internet scale cannot serve two masters, yet it has to.
As anecdotal evidence, try to remember the last time you saw something interesting in the Facebook feed, something that grabbed you in a meaningful way. If you are like me, you can't remember ever seeing anything truly valuable in the feed.
In actuality, the feed is built, designed, and optimized to deliver ads, not to deliver content of the highest quality to its users. In fact, the deeper the quality content is buried, the more ads you have to wade through to find it, thereby increasing the service's revenue.
What all of this has resulted in is a number of once-useful services that have thoroughly optimized themselves to deliver ads and have intrinsically lost their original value to users. This is what killed Myspace, Friendster, etc… It will ultimately kill Google ( albeit more slowly ), Facebook, and probably Twitter as well.
The reason the current model of social networks is untenable is that they are all designed around ads. None of them, at least the "big successful" ones, is designed around users paying, with the service optimizing around value for the paying user. This will cause the end of the great social, free, ad-subsidized internet bubble at some point.
The reason I suggest that it will kill Google more slowly, if at all, is that Google obviously realizes that its current revenue model is untenable. They are aggressively seeking out real value-for-money products to which they can transition when the ad revenue model dries up and the users flee their free online services. People are just bored with these sites; there is nothing on them.
The same thing has happened to television. The reason people are "cord cutting" is that bundling is designed to deliver advertising, not value to the TV services' customers. People aren't stupid forever; eventually they realize they are being hornswoggled, basically paying twice: once with their monthly bill, and a second time with their time. It is just a matter of when.
Posted: October 10th, 2012 | Author: irv | Filed under: Amazon, android, Apple, artificial intelligence, Companies, Google, iPhone, Lifestyle | Tags: amazon, Electeonics, Google, Retail, Shopping, Stores
Montgomery Ward, closed down
Looking at Google's new Maps inside view brings to mind a general problem with physical shopping vs. online shopping. With online shopping, I know exactly who has the item I wish to buy, and I know its price. I can instantly comparison shop without leaving the comfort of my home. This convenience has a downside as well: when I do not know exactly what I want to buy and am just shopping for entertainment, the online experience lacks substance. It is much more fun to peruse Best Buy than to scroll down a page of pictures of gadgets. This is where Google can help.
One of the things Google has done that has no clear immediate value to the company is to map the world in extreme detail, and this has come to include the insides of stores. Amazon does not have this capability. In addition, Google has its Hangouts technology, which, when leveraged with this inside indexing, gives Google both a search index of the real world and the ability to offer a high-fidelity experience with an actual salesperson.
Imagine Google indexing all of the shops in the world, coffee shops, hot dog stands, everything, with real-time inventory of the items appearing in search results. Then they index those images using OpenCV or some other image-recognition technology. Alongside that, every retailer in the world assigns one or more salespeople inside the shop to carry a tablet capable of running a hangout. Again, this represents a giant biz-dev nightmare, but bear with me.
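The image-indexing step is the plausible part: even a crude perceptual hash can match a product photo against a catalog. Here is a minimal, stdlib-only sketch of the idea ( the hash scheme and the fake pixel data are purely illustrative; a real pipeline would use OpenCV feature matching or a learned model ):

```python
# A toy "average hash" to illustrate how photos of store shelves could be
# turned into a searchable index. Real systems would use something like
# OpenCV feature matching; this stdlib-only sketch just shows the idea.

def average_hash(pixels):
    """pixels: a flat list of grayscale values (0-255) for a tiny thumbnail."""
    avg = sum(pixels) / len(pixels)
    # Each pixel brighter than the mean becomes a 1 bit: a compact fingerprint.
    return sum(1 << i for i, p in enumerate(pixels) if p > avg)

def hamming(a, b):
    """Number of differing bits between two fingerprints."""
    return bin(a ^ b).count("1")

# Two near-identical 8x8 "photos" of the same watch display, and one of a shoe.
watch_display = [10, 200, 15, 190, 12, 205, 11, 198] * 8
watch_query   = [12, 198, 14, 192, 10, 207, 13, 196] * 8
shoe_display  = [100, 90, 110, 95, 105, 92, 98, 101] * 8

index = {"watch": average_hash(watch_display), "shoe": average_hash(shoe_display)}
query = average_hash(watch_query)
best = min(index, key=lambda k: hamming(index[k], query))
print(best)  # watch
```

The same lookup, scaled up to billions of shelf photos, is essentially what a "search index of the real world" would need under the hood.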
Now comes the beautiful part. I am at home, surfing the web on my tablet, when I get the itch to go shopping. Instead of hopping into my car, I allow Google to suggest stuff I might be interested in ( Amazon has a huge lead here, but Google will likely catch up thanks to having more signals ). While I'm looking through the suggestions, I see a watch that I am very interested in, so I click into it, and it shows me a map of all of the places around me that have that watch. I click again and ask for a horizontally swipeable inside view of the top 5 locations that carry it.
I can actually browse the inside of the store and see the display with the watch in high resolution. There will be a little place inside the store that I can click if I need help, for instance if the watch is not on display, and the shopkeeper will be notified that I am browsing. At this point, the shopkeeper can signal that they want to have a hangout with me in G+, or I can swipe to the next place at any time and browse it instead. If I do want to discuss the item in a hangout, I can either initiate one or respond to an invitation from the shopkeeper. While on the hangout, the salesperson can express their craft: showing me alternate items, asking me to send data over, such as measurements; we could exchange documents, etc…
This future would be tremendous, and it is something that only Google can do. But wait, there’s more! Imagine that at this point with my Google Glasses, now I can have a full AR view with the details of each item coming up in my heads up display along with other shops’ more aggressive deals ( read ads ). It would be ridiculously awesome!
Ultimately this will level the playing field between online and brick-and-mortar retailers, with the brick-and-mortar guys having a slight advantage until the online retailers start hiring sales reps for G+ hangouts or an equivalent technology. I believe that this will bring a fairly large increase in the number of salespeople employed and reverse the current employment drain that retail is experiencing. It also makes perfect sense of why Amazon is trying to build out its mapping technology as quickly as possible. It will be interesting to see who wins.
Posted: September 6th, 2012 | Author: irv | Filed under: Amazon, Apple, Companies, Facebook, Google
Pop-Up Advertising ( Wikipedia )
Facebook's stock price has been dropping like a rock. Most pundits exclaim that the reason is that there is no "mobile strategy." Some attribute it to the weakness of the virtual-goods market. Rarely does someone come out and say that it is because, by and large, banner and link advertising doesn't really work on the internet.
Facebook and Google both make their living off of display advertising. Adam Curry had some insightful comments on TWiT the other day, and I think he was pretty much dead-on. There are, however, a few points that I don't think the show got to cover.
Display advertising doesn't work. Let's get that out of the way. Earning money by trying to get people to click on banner or link advertising will never work over the long haul. The only case in which those ads create any sort of value is in brand recognition, and the only companies interested in brand recognition are the ones whose revenues are upwards of $20 million. In other words, not most of the companies which are interested in advertising, and definitely not local businesses like the cleaners or the barber shop.
For the majority of businesses, which are not VC rich internet businesses, or multi-billion dollar conglomerates, sponsoring a school basketball team is a better way to spend advertising dollars than ads on Google, Facebook, or anywhere else on the internet.
What Adam Curry said about supply and demand was true, but he missed one aspect. The reason that the advertising on The Verge works for them ( for now ) is that the brands advertising on the site just want to be associated with cool and hip, which is The Verge. It is more aspiration on the part of the advertiser than real value for value.
Cost-per-mille ( CPM ), the original billing metric for display advertising, was based on how many thousands of times a given ad was displayed, later revised to count only impressions seen by an actual person, not a bot. Relatively quickly, after things settled following the innovation that Google brought to the marketplace, advertisers realized that an impression was truly worth near zero, and clicks were the thing. Google then changed to cost-per-click ( CPC ) as the billing metric, and things got marginally better. The problem now is that clicks can't really be trusted, for a myriad of reasons, the primary being gaming and fraud: one can pay huge numbers of people peanuts, in Western-currency terms, to click on a given ad. No matter how clever the algorithms get, they were still designed by people, and other people will be able to figure out how they work. Only when algorithms can be devised entirely by machines will they be truly opaque to humans, and even then there is still the chance that they can be reverse engineered, since they would have to be governed by logic.
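To make the difference between the two billing metrics concrete, here is a minimal sketch; the rates and click-through figure are illustrative assumptions, not real market prices:

```python
# Illustrative sketch of the two billing metrics discussed above.
# The rates here are made-up assumptions, not real market prices.

def cpm_cost(impressions, rate_per_mille):
    # CPM bills per thousand impressions, whether or not anyone clicks.
    return impressions / 1000 * rate_per_mille

def cpc_cost(clicks, rate_per_click):
    # CPC bills only when a user actually clicks the ad.
    return clicks * rate_per_click

impressions = 500_000
ctr = 0.002                         # an assumed 0.2% click-through rate
clicks = int(impressions * ctr)     # 1,000 clicks
print(cpm_cost(impressions, 2.00))  # 1000.0 at a $2.00 CPM
print(cpc_cost(clicks, 0.50))       # 500.0 at a $0.50 CPC
```

The shift from CPM to CPC moved the risk of worthless impressions from the advertiser to the network, which is exactly why fraudulent clicks became the new battleground.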
Let's get back to Facebook and Google and why Facebook's stock is in the toilet. Assuming that impressions and clicks are each worth next to zero, one would claim that volume would drive revenue. This is true in the case of Google. Google keeps its operating costs at next to nothing given its size, and it has an absurd number of people using the product, so it can make some money on impressions and clicks. This is not the same amount of money as it would make if the same number of people were actually buying some product at a fixed margin, but it is a decent amount given the investment.
Google has an impending problem, which is that a CPM near zero is rapidly becoming zero, and a CPC next to zero is dropping to zero as well. Google can shuffle costs and can produce hardware and other products ( Google Play, the Q, Drive, etc… ) to earn more money. In other words, Google is trying to figure out something else to do to earn profit. Facebook, on the other hand, has no other prospects, and they are not nearly as thrifty as Google. The next big thing that people in the ad game are suggesting is social advertising, the idea that you will buy the same things that your friends buy. I don't buy that: it is too easy to game, and the basic hypothesis is flawed. I think there is a kernel of value there, but not enough to support multiple billions of dollars in revenue over 20 years.
I cannot write Facebook or Yahoo off completely, however, and here is why: Apple.
If you care to remember, Apple competed in two "dying" markets where there was "no more money to be earned" and the "barriers to entry were too high." Those two markets were personal computers and portable music players. In those markets, IBM, Dell, Compaq, HP, and others were enjoying decent margins while Apple was slowly dying. When Steve Jobs came back to Apple, he focused relentlessly on improving the product and charging for it ( AKA value for value ). He produced PCs and laptops that were outwardly more or less the same as their competitors', with the exception of a few design flourishes. Apple produced a PMP ( portable music player ) that was competitive at first blush, but certainly not generations ahead of its competitors.
The reason that Apple later trounced them all was that in the race to the bottom, the focus was on which corners to cut. If someone was using resistors with a 5% tolerance, a competitor would loosen that to 10% to save money and reduce the price to the customer. If someone was using hard drives with a 36,000-hour MTBF ( mean time between failures ), they would find someone to charge them less for drives with a 32,000-hour MTBF.
In this number-crunching, mutually-assured-destruction competition, everyone took their eyes off of the simple fact that what they were producing was shit, and it was becoming shittier by the generation. I can't even say the word Compaq without thinking of a burnt-out toaster. They were the worst PCs ever.
Apple, at the time, kept making the best computers they possibly could, in every dimension. The software was top-notch, and so was the hardware. The Apple computer or iPod would look good, be easy to use, and last forever, and if it didn't, they would replace it immediately. That has been changing a bit recently, but it was true for their PMPs and computers in the very early 21st century. As such, Apple could easily charge 20% to 40% more for a comparable item than their erstwhile foes. HP, IBM, Dell, and Compaq, however, couldn't understand how Apple was doing it, or why their customers were defecting to Macs. It didn't make sense to them: why would someone pay so much more for a similar product? Most of them thought it was an anomalous blip, that there must just be more artsy people than they had originally thought.
What they didn't understand was that many people just got sick of taking computers back, dealing with stonewalling customer service, etc… They just wanted something that worked. After the 3rd or 4th failure, they probably complained to a friend, who said, "Well, I have a Mac, and if something is wrong I just take it in and they take care of it" ( social advertising for you ). That is all it takes for most people when they are completely ready to buy something else that really solves their need or want. In other words, value for value. In fact, because the others set the floor so low for their products, it probably helped Apple's high-end image, cementing the enhanced value of what Apple was offering in customers' minds.
I digress. I would argue that Facebook has an opportunity to stay out of this awful battle to the bottom for online advertising. They could try to do something that provides real, true value for people. I have no idea what this looks like; probably being right there when someone decides to buy something, helping them find the exact right product, similar to Amazon, whom I view more as an advocate for me, helping me find just the right thing when I want it, not before or after. Or maybe they could go the sinister way and try to make me buy crap that I don't need through psychoanalysis and data mining. Either way would make them truckloads of money, but it has to be high quality.
There is still one point of caution for anyone involved in mobile or online advertising, or anyone thinking about investing in that space. Apple, that great beacon that refused to battle to the bottom, tried to do something new and fresh with advertising – iAd. It failed. If they can’t do it, perhaps no one can.
Posted: June 15th, 2012 | Author: irv | Filed under: Apple, Companies, Google, Microsoft | Tags: android, Apple, Google, iOS, Microsoft, windows phone
The surprises everyone was waiting for from Apple's WWDC keynote never arrived. Instead, we got a handful of evolutionary features added to generally excellent software, and an amazing piece of hardware. I was actually yawning while following the liveblog. That fact should have the entire tech industry shaking and quaking: that boring keynote just put everyone on notice, though they may not realize it yet.
Apple has done this before. A few years before the launch of the iPhone, the iPod, iMac, and Mac OS X went through a period of minor updates and feature and spec bumps. None of these products became any less excellent, but Apple wasn't doing anything exciting.
We know now that Apple had a skeleton crew continuing to bump aspects of their main product lines, while the majority of the engineers were toiling deep into the night to build iOS and the apps that launched on that device, most notably mobile Safari.
It took them several years, while they were consolidating their dominance of the PMP market, to completely disrupt everything we consider true about mobile computing. That is not to say that the products they launched in the interregnum weren't great. The iPod nano launched, among other things, but I remember thinking, along similar lines as others: is this all you've got, Apple?
The answer today was obviously no. They had much more, and they knew it.
We are seeing the same general stagnation today. It makes you wonder: what the hell are they doing in there? There is really no way to know, but when it is ready I would expect no less disruption than we saw when the iPhone came out. Apple has maybe 14,000 engineers; do you really think that all of them are working on iOS 6, Mountain Lion, or trying to make the MacBook Pro thinner?
Apple takes their time, so it could be six months, or it could be three years. If I were a competitor of Apple's, I'd be getting ready to be disrupted.
I'd Think Bigger Than a Television Set
Apple has already made personal content consumption more prevalent than group consumption.
Sitting around the TV and watching a movie rarely happens anymore; everyone in the family watches whatever they want on their own phone, iPad, or laptop. Apple's next great breakthrough doesn't even have to be strictly media or tech. Perhaps it will be the iCar, some sort of iAutomation for your house, the iHome, who knows. Perhaps their plan is to start building luxury apartment buildings in San Francisco, making spartan but delightfully designed homes out of glass and aluminum.
Posted: May 30th, 2012 | Author: irv | Filed under: android, Apple, Companies, Facebook, Google, Microsoft, Verizon | Tags: android, Apple, Google, iOS, Microsoft, mobile, windows phone
This weekend I switched back, once again, to Windows Phone from my ICS-packing Galaxy Nexus. Previously, I had switched to Windows Phone from my Froyo-era Infuse 4G ( can you believe this phone launched in the US with Froyo? ). I seem to always switch away from Android eventually, and I haven't been sure why, until now. This is not meant to be an Android hate fest; I don't hate any OS. I am a huge fan of both iOS and Android, and I am no longer a fanboy of any camp.
I think that Android is an excellent implementation of the vision for which it was designed. iOS was the first and is still the leader in its category. Both are largely cut from the same cloth; who copied whom, I'll leave for history to decide. For my purposes, however, I am happier with Windows Phone, and I have finally figured out why.
Windows Phone is Designed Around Use Cases
As I was transitioning between my various Android handsets, my iPad, and my new Lumia 900, I kept thinking about what it was in Windows Phone that kept causing me to want to use it. The browser is merely sufficient, the hardware is technically behind the curve ( though the phone as a hardware package is superlative; hats off, Nokia ), and the OS is, well… different. One of the core things, which was immediately apparent, was that it didn't take long for me to get to what I wanted to do with the Lumia from the live-tile home screen.
I don't subscribe to the "Smoked by Windows Phone" campaign; I think that was stupid and wrong. Android is typically faster in specific areas, like time to app launch, etc… iOS smokes both of them in scrolling and touch-screen responsiveness, as well as time to app readiness on the newer iPad 2/3 and iPhone 4S. Windows Phone's speech-to-text is great, but not comprehensive; Android's speech-to-text is better than Windows Phone's; and Siri's voice recognition is marginally better than Android's, if only because of her witty retorts.
Despite all of the shortcomings I have just described, I still prefer Windows Phone. For a few months after I started with Windows Phone 7 on my Focus S, I thought something was wrong with me for liking it. Maybe I was a "feature phone" kind of guy after all. The tech media kept telling me that Android and iOS are better because of their broader app selection, more sophisticated chips, hardware, etc… I could readily agree with this assessment; after all, Windows Phone doesn't have NBA Jam or Angry Birds Space. The more I thought about it, however, the more I realized that I prefer to use my phone for communication first and apps second. Being presented with a grid of apps, or strange widgets, or the wrong panel of the launcher all got in the way of simple communication.
When I use Windows Phone, it is clear: I press People for communication, Me for updating my social networks, Phone for calls. This simplicity and clarity are what keep drawing me back. It isn't that Windows Phone is faster in any way than Android and iOS ( not that it is slow ). It is that each specific task I want to do with the phone has a well-defined path, is clearly encapsulated, and is a complete end-to-end experience with no cruft. It isn't chaotic like the Android intent system, leading me all over the place from app to app, and it isn't ridiculously siloed like iOS. Things that should be combined, like Facebook and Twitter, are grouped together. Games are all in the same place and share a coherent experience that is clearly differentiated from the other flows. Music, podcasts, and audio are all together, unified in the Zune experience, which is likewise differentiated from the game flow and the social flow.
Android and iOS are Designed Like Desktop/Tablet OSes
Once I began to think about use cases, I started to see how ill-fitted Android and iOS were for the phone. I started to put devices into categories based on these use cases, to try to figure out where they go wrong.
When using my desktop or laptop, I am consciously sitting down to perform some fairly complicated task. I expect to have to make lots of decisions to perform that task, and I do not mind the complexity of the windowing system.
When using my tablet, I am typically settling down to enjoy some content, a game, a book, a fun diversionary app, or I am using a productivity app for a task I could perhaps perform on my desktop or laptop. I don't mind actions taking a little extra time on my tablet; I am expecting to explore and engage in an experience.
My phone is different. I am not typically trying to explore. I am trying to find a restaurant to eat at right now, or I am looking for my friend's house while wandering around trying to read street numbers. I am buying something and need to compare prices. I am trying to call someone to have a conversation. In short, most of what I do with my phone is immediate; I don't want to browse.
The grid of apps is really nice for presenting an experience; it is an invitation to browse, to wade into an entire universe of possibilities. A bunch of apps is great for when I want to spend time looking around, like window shopping, when I don't necessarily know what I want to do and just want to be entertained.
I don't really need apps on my phone; I need the workflows inside those apps. I need the restaurant information inside the Zagat application. I need the directions and augmented reality inside Google or Bing Maps. I need the social graph inside Facebook to find out if my friends are busy this weekend. I need the content of the Twitter app to find out what is going on right now. As far as exposing that goes, some Windows Phone apps can do it with their live tiles; for other well-designed Windows Phone apps, there is a clear use case for the application, and it brings as much content to me as it can to help me do something right now.
Windows Phone isn’t perfect; there are still quite a few missing use cases that I would like to see fleshed out, like augmented reality directions or a better workflow around photo sharing.
When you think about things in use cases, you actually start to see that the multitasking system that Windows Phone employs is correct. It is only broken if you look at it as you would look at Android or iOS, or if you compare your mobile computing environment to one that is less mobile. Windows Phone is better thought out than its competitors. Once you let go of the belief that you want your smartphone to be just like your desktop, laptop, or tablet, everything will be fine.
So what if Windows Phone doesn’t have many quality apps? For most of the things I want to do, I am covered. As they add apps, so much the better. I only hope that developers think about how their users will accomplish tasks in real time with the applications they provide, and don’t fall back on the Android and iOS way of sticking a bunch of data into a silo and expecting the user to poke around to find it.
Windows 8, in its current incarnation, is half mistake, in my opinion. For the designers to take a UI and a set of interactions that are successful for phone use cases, and apply them to a desktop OS, is to turn something useful into a chaotic chimera. By implementing the Metro interface on the desktop, I believe Microsoft is not allowing for as much richness and complexity as the interaction patterns of a stationary computing experience should provide.
In the legacy interface, they are just screwing up what was working. It makes sense for them to take the same approach they allowed the Windows Phone team to take: think about the use cases that people are likely to encounter when they are attempting to accomplish something with their desktops and tablets. You may not be able to unify the interfaces, and that is OK. Apple is falling into the same trap, leaving a massive opening for someone to do something awesome with the desktop computer…. Canonical, are you listening?
Let it go; the desktop paradigm is dead. Stop worrying about how things used to be and learn to experience Windows Phone for what it is: a beautiful breath of fresh air, a new way of thinking about mobile interaction. Hopefully Microsoft doesn’t screw it up. If their marketing is any indication, I am worried about the future. If they leave the Windows Phone team alone and allow them to keep doing what they are doing, things will be great.
Posted: April 19th, 2012 | Author: irv | Filed under: Amazon, Apple, Companies, Facebook, Google | Tags: amazon, analytics, bezos, big data, great, jeff, metrics, strategy, Web | No Comments »
For reasons unknown, it seems that the tech media completely fails to give Jeff Bezos and Amazon the recognition that they deserve. I believe that this is due to a deliberate strategy executed by Amazon to quietly grab as much mind and market share as they can. If they continue on their trajectory, they may become unassailable, in fact, they may be already.
There are blogs and podcasts called things like Apple Insider, This Week In Google, Mac Break Weekly, etc… I have yet to hear of any blogs or podcasts about what Amazon is doing week in and week out, but in many ways it is much more interesting. Amazon now handles 1% of consumer internet traffic, pushing all of it through its near-ubiquitous compute cloud infrastructure. They are rapidly and efficiently dismantling existing retail. Amazon is probably on their way to completely owning web commerce. Amazon has massive amounts of data on what people have, want, and will want based on what they own and buy. Through their mobile applications they are gathering pricing signals from competitors so that they can use their own cluster computing prowess to adjust pricing on the spot.
What is shocking is that, despite their proficiency, no one discusses how absurdly dominant Amazon has become. Everyone just treats Amazon running all internet commerce and large swaths of its infrastructure as “the way it is.” Amazon is more a force of nature at this point than a company.
It isn’t just the tech media that doesn’t give them the credit they deserve; major tech companies don’t either. Google and Apple seem ready to laugh off the Kindle Fire while Amazon soaks up more signals. Microsoft doesn’t even try to match them. Google’s commerce efforts look half-baked compared to what Amazon does, and they show no signs of trying to do better.
With all the bitter rivalries we constantly hear about between Apple and Google, Microsoft and Google, Microsoft and Apple, and so on, it is absurd that no one has started a podcast about Amazon. Fifty years from now technology changes will have toppled Apple, Google, Facebook, and Microsoft, but I’d bet that Amazon will still be around.
Jeff Bezos and his company wield algorithms and data more effectively than anyone else in the industry, despite all the credit we give Google for search. Their suggestion and comment-filtering algorithms are, bar none, the best around. Amazon is integrated into the fabric of our lives to a degree that no other tech company has achieved.
Amazon will keep doing what Amazon does best, being ruthless, being efficient, executing better than anyone else, and staying ahead of the curve. As long as we keep ignoring them they are doing their job. The greatest trick Amazon ever pulled was convincing the world that they didn’t exist. They have convinced the world that they are just retail.
Posted: March 12th, 2012 | Author: irv | Filed under: android, Apple, AT&T, Companies, Google, Verizon | Tags: FCC, free-market, mobile, policy, regulation, technology, wireless | No Comments »
Much of the tech media is thinking about the problem of wireless spectrum in the wrong way. It is helpful to think about why the commission was created, and the problems its existence has created in the market.
The FRC, and later the FCC, was created in response to complaints by large national broadcasters that smaller regional broadcasters of dubious quality were transmitting on all available frequencies with such power that the national broadcasters couldn’t get a clear signal through.
The government at the time thought it would be a good idea to control the broadcast frequencies so that they would have spectrum available for use with military equipment domestically.
So a deal of sorts was struck where in return for being regulated, national broadcasters would receive a monopoly on broadcast licenses. This would later cause the breakup of these same broadcasters.
The spectrum was allocated by frequency with buffer zones between blocks. Power output limits were also part of the regulation package, without which, the allocated frequency blocks would have been useless. This represented a modern and efficient set of controls based on the technology of the era.
Skip ahead most of a century and see what has happened. The technology has improved; the regulatory framework is obviated by the technology. However, since we have an intruder in the market preventing normal forces from solving the problem, the same companies who were given the monopoly now control which innovations consumers are allowed to buy. These same companies set and fix prices, and collude to gouge both content companies and consumers to the tune of ridiculous profits.
Let’s spin back to the past and examine what would likely have happened if the FRC and subsequently the FCC had never been created.
In the late thirties, the first experiments with digital technology were being performed. At the same time, content companies were beginning to have difficulty broadcasting because of band saturation on radio and, later, television frequencies.
The war had driven, and was continuing to drive, incredible advances in communications technology, including experiments with digital broadcast technology.
What Bell would likely have done was seek profit from all of the major broadcasters of the era. They would have said, “Hey, we have this cool way of using digital transmission that lets us send tremendous amounts of data from different sources to different destinations through the same frequency, using codes to distinguish one broadcaster from another. Why don’t you all pay us a small fee to register yourselves with us, and we’ll build a network of general broadcast/receivers throughout the country. We’ll use more frequencies as necessary, but since we can pack you in, we will always have plenty.”
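The idea of distinguishing broadcasters by code rather than by frequency is the essence of code-division multiplexing. As a minimal sketch of the principle (illustrative only; real systems add synchronization, noise handling, and much longer codes), two broadcasters can share one frequency if each spreads its bits with an orthogonal code:

```python
# Two broadcasters share a single channel, separated only by orthogonal
# 2-chip spreading codes (simple Walsh codes). Bits are +1/-1.
CODE_A = [1, 1]
CODE_B = [1, -1]

def spread(bits, code):
    """Spread each data bit across the chips of the broadcaster's code."""
    return [bit * chip for bit in bits for chip in code]

def despread(signal, code):
    """Recover one broadcaster's bits by correlating with its code."""
    n = len(code)
    bits = []
    for i in range(0, len(signal), n):
        corr = sum(s * c for s, c in zip(signal[i:i + n], code))
        bits.append(1 if corr > 0 else -1)
    return bits

# Both transmit different bit streams at the same time, on the same frequency.
bits_a = [1, -1, 1]
bits_b = [-1, -1, 1]
shared = [a + b for a, b in zip(spread(bits_a, CODE_A), spread(bits_b, CODE_B))]

# Each receiver extracts only the stream matching its registered code.
assert despread(shared, CODE_A) == bits_a
assert despread(shared, CODE_B) == bits_b
```

Because the codes are orthogonal, each receiver’s correlation cancels the other broadcaster’s contribution entirely, which is why a bundler could keep adding registered broadcasters without handing out new frequencies.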
This didn’t happen, for many reasons, and has its own problems, but what would have happened next is the rapid rise of bundlers: companies whose job it was to use the spectrum they chose as efficiently as possible to maximize their own profit. It would have sped the pace of innovation and prevented the mess we have now.
Eventually the government would have stepped in and mandated that these bundlers reserve spectrum for military and government use. The bundlers would have complied.
Fast-forward this system to today. We would have hundreds of free-market companies competing for the most efficient use of spectrum, with AT&T, T-Mobile, and hundreds if not thousands of competing telecom and internet providers. The barrier to entry for these providers would be very low, since they would just need to pay the bundler that best suited their needs.
These bundlers would have built powerful networks of broadcast/receivers everywhere: on roadsides, inside buildings, and so on. There would be no lack of spectrum, and no need for excessive, heavy-handed regulation, as each advance in technology would allow the bundlers to use ever less spectrum and to sell both the technology and spectrum licenses to even more bundlers.
This didn’t happen, so where do we go from here? There are few places where it is so obvious that regulatory interference has caused irrational behavior in the market as in wireless. The FCC should embrace digital technology and require broadcasters to form independent corporations to act as the bundlers that I described. These bundlers would manage the infrastructure for the broadcasters with the broadcasters riding on them. Once the system was in place, the licensing system would be replaced by power output limits, and the FCC would assume a greatly reduced role.
There are many flaws in my proposal, but we have to get the FCC out of the spectrum licensing business. Technology is sufficiently advanced that we do not need this frequency-based system; it is causing more harm than good. We need to let the market work to provide better access to everyone. It is critical that the barriers to entry for wireless carriers be lowered if we want real competition and innovation in wireless going forward.
Posted: January 21st, 2012 | Author: irv | Filed under: AT&T, Companies, Facebook, Google, Management, Microsoft, Twitter | Tags: anti-trust, doj, facebook, Google, ibm, Media, Microsoft, social, twitter | No Comments »
When Google added Search, plus Your World, at first I didn’t think there was much of a problem. I understood that since Twitter and Facebook limit the ways in which Google can interact with them, it wasn’t really possible for Google to offer truly social search. This cabal between Facebook and Twitter is quite obviously hugely damaging to Google’s future interests as a company, so I also supported the need for Google Plus.
However, as I have been thinking about it, most companies that became anti-competitive, or foes of the free market, got there under the banner of simply looking out for their business interests in responding to a threat. Inside most potential monopolies, the issue that crops up after smashing a formidable challenger is knowing when to stop.
Google is promoting G+ as the bulk of its social search; G+ is completely unavoidable as you use the search engine. This puts Facebook and Twitter at something of a disadvantage. They also promote YouTube in a similarly in-your-face manner, putting Vimeo and other web video companies at a disadvantage.
It isn’t hard to imagine a world in which startups don’t even look at web video because YouTube is unassailable. Similarly one could imagine, though it is more of a stretch, that eventually Facebook and Twitter would wither and die at the hands of Google Plus, since there is really only one search engine and the entire world uses it. That world would be ridiculously anti-competitive, and no one, including Google, really wants to see that.
I believe that if Google had gotten its just deserts, Facebook and Twitter would have given it unfettered access to their data, and Google Plus would have been unnecessary. But since they didn’t, G+ is more than beneficial for Google’s survival; it is essential. The same could be said about YouTube and Google Music in the face of iTunes.
One could argue as well that Google hasn’t been very effective of late at controlling what is going on within the company. Clearly there is a massive amount of resource contention, and a general challenge in keeping everyone on the same page and playing for the same team. In addition, there is the kind of limited thinking that prevents the company from disrupting its own business units. Microsoft had (and has) this problem; so did IBM, and so did AT&T.
AT&T, however, operated like a well-oiled machine; they had no problem crushing all competition and effectively responding to all challengers. Google is just as innovative as AT&T used to be, and they will similarly get through their management issues; in fact I think they are very near that point. Google getting through their effectiveness issues, however, is exactly what bothers me: once they become as effective as AT&T used to be, isn’t that where the government steps in?
So what I propose instead is that Google break itself into separate businesses voluntarily. One of the main rules of business today is never to let a competitor, or government, disrupt you. It is better, and more profitable to disrupt yourself. I would suggest to Google, for this reason, that now is a good time to do it.
I would imagine that Google would become five corporations, split along the lines of social, media, search, mobile, and advertising. This would see Google Plus, Reader, Gmail, Google Talk, and Google Docs become the Google social business. Google Docs may initially seem like a strange product to call social, but the purpose of Google Docs is to collaborate on work. That is pretty social as far as I’m concerned; in fact, it is probably the most social that people are in general.
The media business would consist of YouTube, Google Music, Google TV, and the nascent Google Games. The search business is self-explanatory. Mobile would be Android, but also Motorola with the new purchase. And Google advertising would be their display, print, and television advertising business. Each company could retain a small ownership stake in any other company it depended upon. For example, Google media might maintain a 5% to 10% stake in Google social so that they can be sure their requests are heard and honored. All of the businesses would hold a small share of the advertising business, but the total should not add up to more than 40%, so that the advertising business could remain autonomous.
The resulting companies would end up becoming far more competitive and profitable than their corresponding business units, due primarily to the need for providing open APIs to the other businesses that need their services. In the process, these businesses would make these APIs available to other startups who could build off of Google’s services as a platform, driving further profitability and end user lock in.
This would in turn surround their competitors, each of which is still a single silo and would begin to run into anti-trust concerns of its own. The now ridiculously nimble Google, which could be known as the Googles, would have them surrounded.
As a single entity, Google is vulnerable to the same diseases which have, in the past, felled their erstwhile competitors. As multiple independent, profitable companies, the Googles could remain dominant for decades. This would be better for the industry as a whole, because each Google business with public APIs would provide a platform for numerous job-creating, profitable startups. C’mon Google, do what is right for the market and for your business. Don’t wait for the DOJ to hold a gun to your head like it did with AT&T. Even with the government forcing the issue, being broken into the Baby Bells seems to have worked out pretty well for AT&T.
Posted: July 30th, 2011 | Author: irv | Filed under: android, Apple, Companies, Google, iPhone, Lifestyle, Media | Tags: android, chromebook, Honeycomb, iOS, iPad, mobile, netbook | No Comments »
The tablet entered with a huge bang a few years ago. It was staggering: Apple sold an incredible number of iPads and forced all of the netbook manufacturers and Google to scramble to produce and release a tablet OS, namely Honeycomb, that was arguably not ready for release.
The result is that both iOS and Honeycomb are excellent tablet OSes, and Ice Cream Sandwich promises to be a stellar tablet and smartphone OS. What I have been discovering over the past year and more of using both versions of the iPad and the Galaxy Tab 10.1 is that I don’t really need a tablet for general computing.
This is surprising to me. I built an IDE for the iPad and iPhone after all, and found myself using my own product more on the iPhone for quick edits than I did on the iPad.
I watch an awful lot of Netflix on the iPad, and I play games most of the time that I am using it. I have found that with the Galaxy Tab, my patterns are much the same: gaming, watching videos, occasionally reading (although I still prefer my Kindle hardware to the tablet versions).
So I am coming to the conclusion that the pundits were right initially: tablets are clearly for content consumption, not content creation. The reason, however, that these devices are not suitable for content creation is worthy of debate, and is an issue that I’d like to take up now.
Natural User Interfaces
The default user interaction that most tablets sport is something that is being called a natural user interface, that is, an interaction that uses some of the user’s other senses, such as motion, to perform an on-screen action. The current crop of tablets mainly uses touch, instead of a dedicated hardware component, to facilitate the user’s interaction with the interface.
This lends itself obviously to gaming, and to a “kick back” experience of sorts. The user can use touch, or the gyroscopes, to control a character on the screen; this makes logical sense to just about any user.
As an example, many role-playing games have a 3/4 view of the game board; that is, the camera is typically at 5 o’clock high, or somewhere thereabouts. The control scheme for most of these games is to touch a place on the screen to send the character to that location. Role-playing games work particularly well on tablets for this reason; they are almost better with a touch interface than with a controller.
As another example, car racing games use the accelerometer in the tablet to control an on-screen car. This works well unless you are in a position in which your motion is constrained, such as in bed, so most of these games provide some sort of alternate touch-based interaction that replaces the accelerometer input.
The problem with using on-screen touch points in auto and first-person shooter games is that the controller now covers part of the screen, or your hands end up covering important parts of the game world, causing the player to miss part of what is happening. I know that in my case it takes away from the FPS experience, so I typically don’t buy those sorts of games on tablets, preferring to play them on a console.
Natural user interfaces only work when the content is modified such that the user can interact with it sensibly using the available sensors: gyroscope, touch screen, microphone, et cetera. In a famously bad case of using a natural user interface to interact with content from a platform that uses traditional input, Numbers presents the user with a typical spreadsheet like the one you would find in Excel on your Mac or PC. The issue here is that Apple didn’t modify the presentation of the content to match the platform. Arguably there is no way to do this in a form that makes sense.
The interface for Numbers features beautiful graphic design elements, and is generally pleasant, but when you tap on a grid element, a virtual keyboard pops up and you are invited to type into the fields. Apple has made a numeric keyboard interface which is pretty nice, but anytime you display the virtual keyboard, you haven’t thought hard enough about the problem. Displaying a grid of content is not useful on this device; it is amazingly useful on the desktop, but it just doesn’t work here. Inputting large amounts of data is frustrating, and the virtual keyboard makes mistakes all too common, whether through mistyping or the misguided autocorrect.
Modifying Content for the Natural Interface
Most of the people who are buying tablets today appear to be tolerating these issues. My belief is that they are doing this because tablet computers feel like a piece of the future they were promised when they were children, useful or not. Eventually, they will likely stop using their tablets altogether in favor of ultralight laptop computers, or they will relegate the tablet to the living room table as a movie-watching and game-playing platform.
It is possible to make significant user input acceptable on a tablet, perhaps even pleasurable, with a bit of creativity. First, the keyboard is a complete failure. It has its place, but in most cases it can be replaced by effective gesture (non-touch) and speech recognition. This is the only viable way to bring in large amounts of content.
On the visualization front, using our Numbers example, perhaps a flat grid is not something that makes sense on the tablet. Maybe we should send the data to a server for analysis and present it as a series of graphs that the user can change, manipulating the graph directly with touch actions or with spoken commands. The result of the changes would flow back into the spreadsheet, updating the numbers behind the visualization.
Many would argue that this would not be a rich enough interaction for some of the complex spreadsheets, pivot tables, and so on that they work with; indeed, it likely would not. Most of these users would not perform those actions on the tablet; instead they would use a MacBook Air or other lightweight laptop. It takes a huge amount of creativity and intelligence, as well as significant computing power, to manipulate data in this way.
Imagine a speech interface for a word processor that could use the camera to track your facial expressions to augment its speech accuracy. It could, and should, track your eyes to move the cursor and ask you to correct it when you make a bad face at a misinterpreted sentence. An application like this could make word processing on a tablet a wonderful experience.
The technology to do most of these things is here, but it is either fragmented, with each part patented by a different company, some without any sort of tablet, such as Microsoft with the Kinect; or the effort to produce software that uses the features of tablet computers to best effect is too great to justify the investment. For example, doing that sort of work for a word processor doesn’t make sense when people will just jump over to their laptop to use Word. Would anyone pay $100 up front for an iPad word processing application? I don’t think so. Would anyone pay $25 per month for the same application as a service on the iPad? It’s equally doubtful.
What you come to eventually is that, for interacting with content that either naturally lends itself to, or can be easily modified for, the tablet, it is fantastic. Currently, however, it is severely overpriced for how it is being used. After all, you can get a fairly cheap notebook that can play Netflix and casual games for $200, or one-third the price of most tablets. If you have to carry your laptop anyway, why would you have a tablet at all? Why wouldn’t you take the Air with you and leave your tablet at home? It can do everything the tablet can do, and it can also handle any of the content creation you care to try.
Thinking about the situation, we need to find better business models that will allow for the development of applications that can make the modifications to content that we need for tablets to be generally useful. This will take a while, and in the interim it is likely that some companies will produce tablet hybrids; the ASUS Eee Transformer is one that comes to mind. It is very popular, runs a mobile tablet operating system, but becomes a keyboard-wielding notebook in a second.
The Google Chromebook is another example of a lightweight (even in software) laptop that can do most of what a tablet can do, as well as most of what the typical laptop does. In my own use, excluding building applications for tablets, I always reach for my Chromebook instead of my tablets. And all of this is before considering the huge difference in the difficulty of building applications for the two platforms.
Writing applications for tablets is extremely hard, with a doubtful return on investment unless you are making a media or gaming title, while writing applications for the web is easy and potentially extremely lucrative, with many possible business models and little interference from device manufacturers.
I am starting to think that Ray Ozzie was right when he said that Chrome OS was the future. It feels more like the near future than the iPad at this point. The tablet will always have its place, and perhaps, with significant advances in natural user interface technology and accordant price reductions, it will start to take over from the laptop. I am fairly bullish on the natural user interface over the long term, but at the same time I pragmatically understand that we aren’t there yet. The devices, software, and consumers have a lot of work to do for us to really enter the era of the computerless computing experience. I am committed to getting there, but I think that the current crop of tablets might be a false start.