Today is a good day to code

Is The Raleigh-Durham Research Triangle the Next Hot Spot in Tech?

Posted: August 12th, 2009 | Filed under: Uncategorized

It seems that more often than not anymore, when I am listening to a podcast or reading about some cool new tech startup, it is in North Carolina. I am wondering if we should rename it Cloud Hills, North Carolina. I started paying attention even before Apple and the governor of North Carolina acknowledged the one billion dollar deal to locate an Apple server farm there. I have some pet theories as to why the area is hot and about to get hotter.

The first requirement for a thriving startup scene is access to capital. Charlotte has that in droves, and is only a relatively short drive from RDU. A few months ago, things there were touch and go with the financial meltdown. Charlotte, arguably the second biggest banking city in the US after New York, was looking at losing a significant number of businesses. However, after some luck and quick work by the government, things seem to be looking up. I would imagine that these banks are looking to put their capital in something a bit more secure than financial derivatives. Besides, what is better for PR than investing in small business and “putting America back to work”?

The next requirement is a thriving higher education community. Between Charlotte and the Research Triangle, there are a number of first-class universities with some of the most storied histories in the US: Duke, UNC, Wake Forest, you name it. There are plenty of hungry young minds looking for some VC and a little opportunity.

There also needs to be solid infrastructure around broadband, electricity, and existing tech resources like hosting. They seem to have that, judging by the fact that Apple can build a huge server farm there, and by what the universities, biotech companies, and bank tech already consume in server resources. There is skilled labor there, and know-how. Plus, with companies like Red Hat out there, companies can find the services they need. More importantly, the state has shown that it has the governance to get it done. The fact that it was able to change laws to meet Apple’s requirements shows that it is serious about tech, and having a working government puts it head and shoulders above California on that alone.

Not that I am eager to leave the Bay Area in any way, shape, or form at the moment, but North Carolina has my attention. There is a lot going on outside of the microcosm of the Bay Area, and I think people are starting to take notice.


Google Blog Search Tool

Filed under: Google, Uncategorized

Google recently launched a beta blog search tool. I have only been using it for one day, but so far I am impressed. I noticed that I was being crawled pretty heavily over the past few days, and Google blog search seems to have just about all of my pages. The Googlebot has been back to my site almost constantly since then.

The only question that remains is how Google ranks blogs. Most of the time PageRank is a non-factor for blogs, which explains why most of the good blogs were so hard to find on most search engines. Many people don't link back to blogs, so links aren't always the most accurate indicator of popularity. No one seems to have an answer. Probably the better question is how Google knows that a site is a blog at all. It seems that it goes by the XML feed. Still, if it went by the XML feed alone, it would surely not have found so many of the pages of my site, since I only include the most recent 10 or so posts in my feed. It must have some patented criteria that it uses to separate blogs from non-blogs. I'm sure the SEOs have started their engines and are already working on ways to game the system. I guess they figure that if they can get a site onto the blog search they have a better chance of getting to the top. I'd say that they are right, although I'd wager that Google has a very good algorithm looking for fraud.
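On the feed question, the standard RSS autodiscovery tag in a page's head is presumably what tips a crawler off that a feed exists. A minimal sketch, with a placeholder feed URL:

<link rel="alternate" type="application/rss+xml" title="RSS" href="http://example.com/feed.xml" />

That alone would only reveal the most recent posts, though, which is why the regular crawl of the archive pages must be feeding it as well.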

All in all, since I am a developer I find myself more often than not looking for information in blogs, so for me this new blog tool is very cool. I am not sure how it does this, but it determines blogs that may be of special interest to me based on my search and puts those at the very top. Maybe it uses my search history to determine these. We'll find out soon. It is already better than IceRocket, and as soon as they add RSS / Atom aggregation, it may be better than anything else for a portal. We'll just have to wait and see.

Google Blog Search


New and Improved MSN Search

Filed under: Uncategorized

It seems that MSN has roughly the same number of pages indexed as Yahoo and Google, and yet on almost every search it returns fewer pages than either of its contemporaries. I have noticed that MSN's relevancy tends to be pretty good. It is also possible to customize your queries to an extent with MSN's sliders, so you can choose whether you want the most popular results, the most current, or the most relevant to your keywords. Also, MSN's search engine is much faster than Yahoo, and a little faster than Google, though this could be due to there being much lighter traffic across Microsoft's servers. I would attribute the performance to a combination of good programming and ASP.net.

I don't particularly like ASP.net, mainly because it lacks a solid framework like Struts for Java. I also don't really like VB syntax, although I have to admit that in version 7 it is greatly improved. But back to searching: if you search for fusebox in Yahoo, you get about 1.1 million records returned. If you perform the same search in Google, you get about 215,000 records returned. I believe that Google has had a recent shake-up of its index. In the MSN search you get about 245,000. In the Google results you get a lot of art studios, whereas in the MSN search you get articles about the Fusebox framework almost exclusively in the first page of results. Yahoo gives you a mixed bag, seemingly alternating back and forth between the Fusebox music site and the Fusebox web development framework. In my particular case I was thinking about the web development framework, but there is no real way for a search engine to know that.

Prior to this week, MSN's search results were pretty useless, so I'm glad to see that Microsoft is working to do things a little differently. I notice that in my case I have back-links reported in MSN that are not listed in Yahoo and Google. Still, I tend to place higher in the SERPs on Yahoo and Google, and often I shouldn't. I think Microsoft is branching out and using different algorithms, instead of checking Google's results and altering its own algorithm to track Google's index. Copying is lame, and I think more search engines should try new things.


Search Engine Optimization – The Google Sandbox

Filed under: Uncategorized

It is pretty clear that Google has shifted its weight away from the PageRank algorithm that has made it famous, and is now relying on some combination of other elements to determine where your site will come up on SERPs, or search engine results pages. There are the standard methods of gaining higher placement, like getting rid of the question mark in the URL string, not using an 'id' URL parameter, and making sure your pages have no HTML errors. Some of what I have been reading has indicated that internal linking is important as well. All of this effort to rank higher on the SERPs makes me think: why isn't everyone simply working to make sure their site has good content to place higher?

It is surprising to me that it is easier to game the search engines than it is to fill your website with good content. For example, there are many sites that simply syndicate a more popular site's RSS feed on their own site in order to get more pages, and therefore more text by which to be found. This is extremely annoying to me, as I have often searched for a term only to find the first two pages of results full of the exact same content. To try to curb the intense gaming that has been going on, Google has seemingly admitted to employing a sandbox which penalizes sites whose rankings get too high too fast. While it seems like a fairly extreme measure, it is the only way to stop people from buying links from PR 7 sites in order to get higher rankings immediately. It is hard to tell how long a site has to spend in the sandbox, because the duration appears to vary. My guess is that if you gain too quickly, or get links from several highly ranked sites in a short interval, Google will place you in the sandbox and evaluate your site. If it is found that your site is full of good content, they will probably release you from the sandbox fairly summarily; however, if you are gaming them by copying someone else's feed, or have bought a bunch of links, they will probably keep you in the sandbox indefinitely. Of course, the best way to achieve high rankings on Google is to get a single good link and keep updating your site with good content.


Google Should Revisit its Search Efficacy

Filed under: Uncategorized

We all love Google's new cool features, but has anyone else noticed that Yahoo! and alltheweb seem to be returning better search results? I find myself more and more frequently having to wade through several pages of results to find the one I am looking for on Google, while using the exact same words to search alltheweb and Yahoo! almost always gets me what I want on the first page.

Google uses a combination of PageRank (importance), backlinks, and ultimately page content to determine how pages are placed when users search. When they first started, and there were fewer pages in their index, things were probably easier. But now that they have 8,058,044,651 pages indexed and counting, it gets harder and harder to work the backlinks angle. Even assuming they have enough horsepower to find every page that links to every other page, the linking would be so diluted it would be extremely difficult to use this technique to filter results. Blogging has made it even worse, since now blogs link to other blogs, of which there are millions. Gaming is pretty easy: apparently people were intentionally linking the text 'miserable failure' to the online biography of George W Bush, and now when you search on that text in Google, his biography comes up number 1. I don't personally have a problem with that activity, and I think it's pretty funny; the problem comes in when someone releases a book called Miserable Failure and a reader tries to find the webpage for the book using Google.
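For reference, the PageRank formula as originally published (Google's production version has surely evolved since) shows why the dilution happens. With d a damping factor, usually given as 0.85, T1 through Tn the pages linking to page A, and C(T) the number of links leaving page T:

PR(A) = (1 - d) + d * ( PR(T1)/C(T1) + ... + PR(Tn)/C(Tn) )

Each page's vote is divided by its number of outbound links, so when millions of blogs each link to dozens of other blogs, any individual link carries a vanishing share of the rank.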

The same issue crops up on all the search engines. Many of them have resorted to copying Google's method of searching to different degrees, or it could be that the gamers are just that good. Yahoo! and alltheweb both return the biography of GW Bush for 'miserable failure,' yet when I search for 'XHTML Programming' on Google I get some tutorialfind site, which gives me a runtime error, while Yahoo! and alltheweb both return the w3schools site, which I find to be excellent, as their first result. And yet, the w3schools site is not in the top 10 results returned by Google. Why is the best search engine around serving up broken or irrelevant results for a simple search on XHTML programming?

Other engines are still using the older model of actually looking at the page's text to determine relevance. The Google model is better for Flash sites where there is no real text to search, but in real-world use, I am looking for the site that contains some particular content, not the most popular site that contains it. Some blogs are particularly informative, but since they are text heavy and link light, or really new, they don't rank highly on Google; those same sites are up there on Yahoo! Alltheweb is a different beast: it doesn't have the traffic that Yahoo! and Google have, and it probably doesn't have the same number of sites indexed.

Maybe it is in Google's strategy to ignore their search results since they know it is impossible to completely thwart the gamers; perhaps they are doing a Wizard of Oz on all their users: 'pay no attention to that man behind the curtain.' Either way, I hope they change their focus back to providing great search results, because that is why they are number one. Without that, the features are pretty irrelevant.


Fulltext Searching in MySQL

Filed under: Uncategorized

MySQL has been around for a while, and it remains my favorite database development platform because, even when I think I know it, new features keep being added that, though sometimes challenging, enable it to compete directly with Microsoft SQL Server. Right now at work we are all using Microsoft SQL Server, yet it seems that every time I find a new feature in SQL Server, it is also available in MySQL. The feature that I have found that I love is the combination of fulltext indexing and searching.

The documentation is pretty extensive in this area, so I won't copy too much of it here, but the gist of it is that fulltext searching will enable you to create an index via the

FULLTEXT (field1, field2)

clause, which goes inside a CREATE TABLE or ALTER TABLE statement. It is important to remember to index the fields that you want to search together at the same time, in a single FULLTEXT index, or the search won't work.
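As a minimal sketch of what that looks like in context (the table and column names here are placeholders, and in the MySQL 3.23/4.x era FULLTEXT indexes required MyISAM tables):

CREATE TABLE articles (
    id INT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
    field1 VARCHAR(255),
    field2 TEXT,
    FULLTEXT (field1, field2)
) TYPE=MyISAM;

I'll quote the documentation for the SQL: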

SELECT id, field1, MATCH (field1,field2) AGAINST
('text to be searched') AS score
FROM articles WHERE MATCH (field1,field2) AGAINST
('text to be searched')

This will generate a search for 'text to be searched' in the fields field1 and field2. The results will come back with the id, the text in field1, and the relevance of each match as score.

Fulltext searching in MySQL is reasonably quick, at least as fast as doing everything on the application server side. It has the added benefit of leaving your app server free to serve. The SQL is quirky in that you are aliasing the MATCH expression as score. The MATCH function always returns its results, so if you put it after your SELECT keyword without adding any other fields, you will just get a column of numbers that indicate the relative strength of the matches.

The SQL search has a few more quirks, namely that there is a 50% rule that governs the relevance of the search string: a word that appears in more than half the rows is treated as too common and effectively ignored. For example, if you have a blog about Microsoft and you do a search for Microsoft, you will probably get no results returned, because nearly every article will have the word Microsoft in it somewhere. Also, a space between words acts as an OR instead of an AND, which may confuse users. I have had some trouble getting boolean searching to work with my host's MySQL 3.23 installation, but my 4.1 version seems to work fine. The search also excludes any words under four letters in length. This is to limit the size of the index, as well as to get rid of words that won't be relevant anyway, such as 'get.'
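Boolean mode, which is what requires MySQL 4.x, is the escape hatch for most of these quirks: the 50% rule does not apply there, and + and - prefixes give you AND/NOT semantics. A small sketch against the same placeholder table:

SELECT id, field1 FROM articles
WHERE MATCH (field1,field2)
AGAINST ('+microsoft -apple' IN BOOLEAN MODE);

Here every match must contain 'microsoft' and must not contain 'apple', even if 'microsoft' appears in most of the rows.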

All in all, it is definitely a useful tool, and can save a ton of work. If you have control of the DB server, then you can refine the search defaults to get the most accurate results for your data. You can check it out at Search Owens Performance Blog.
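For example, the four-letter minimum mentioned above lives in the ft_min_word_len server variable, which can be lowered in my.cnf if you control the box. A sketch, using the placeholder articles table from earlier; existing fulltext indexes have to be rebuilt after changing it:

SHOW VARIABLES LIKE 'ft_min_word_len';
-- after lowering the value in my.cnf and restarting the server:
REPAIR TABLE articles QUICK;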

MySQL documentation for fulltext searching


OpenSearch In IE 7 and Firefox 2

Filed under: Uncategorized

I've been playing around with trying to get OpenSearch going for a project of mine. I finally got it to work in IE 7, many hours after I got it working in Firefox, after checking out a comment on an MSDN IE 7 blog post. Whoever gamekid is, he or she is a godsend. Microsoft should definitely go back and change the xmlns for OpenSearch in all posts regarding it; they would save developers hours of time.

At any rate, at the base level, thanks to the flexibility earned by using XML namespaces within the OpenSearch response, whether text/html or application/rss+xml, you can easily aggregate search engine responses, which is exactly what Amazon's A9 search engine is now doing. It is cool. I like that Mozilla didn't break the OpenSearch standard to add enhancements like suggest, although their JSON response of ["{search phrase}", ["result 1","result 2","result 3"]] is a little strange. It didn't take me too long to figure it out, though.

OpenSearch is a great idea, and it is implemented decently by most modern browsers. In many cases, it saves a toolbar developer from having to write a toolbar, and in other ways it helps in that it can force you to build a dynamic RSS feed or an internal search engine. It can also start the push inside companies toward a service-oriented architecture, which in many cases is sorely needed.

The only thing that I still wonder about is how many users truly use the OpenSearch box and change their search provider. If Amazon hadn't beaten me to it, I'd definitely write a crawler / aggregator similar to Dogpile for the OpenSearch XML descriptors. Another cool feature is that since it supports link rel style linking, it is possible to have a master OpenSearch feed, consisting of many other feeds in a company, helping to organize the internal data structures.

It's simple enough to engage even beginning developers, yet the implications are huge. I wonder how Google let Microsoft and Amazon fly past them on this one, but I guess no matter how smart you or your staff are, you can't think of everything.

Specifically:

The xmlns attribute on your OpenSearchDescription tag should not be xmlns="http://a9.com/-/spec/opensearchdescription/1.1/"; instead it should be xmlns="http://a9.com/-/spec/opensearch/1.1/". Again, thanks a ton gamekid, I'd throw you a link if you had left one!
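For reference, a minimal sketch of a descriptor using the correct namespace; the example.com URLs and names are placeholders:

<?xml version="1.0" encoding="UTF-8"?>
<OpenSearchDescription xmlns="http://a9.com/-/spec/opensearch/1.1/">
  <ShortName>Example Search</ShortName>
  <Description>Search example.com</Description>
  <Url type="text/html" template="http://example.com/search?q={searchTerms}"/>
</OpenSearchDescription>

And the tag that advertises it from your pages' head so the browser's search box picks it up:

<link rel="search" type="application/opensearchdescription+xml" href="/opensearch.xml" title="Example Search" />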


Why The Current Crop of Social Networks Can Not Survive

Posted: March 5th, 2014 | Filed under: Companies, Facebook, Google

While the current group of social networks are extremely popular and appear wildly unassailable, it is my assertion that they are eminently fallible. Over the past year, it has become obvious that they are having difficulty attracting users. Facebook needs to grow by purchasing smaller companies for ever increasing amounts of money. Google can’t really build a critical mass of users around its offering even though it is technically excellent. Even Twitter, the most non-social of the social networks, has trouble attracting new users while implementing features to increase its revenue streams.

Arguably, one could claim that as everyone joins the network, there aren’t additional people left to add; this is the theory of saturation. But while Facebook asserts that its daily active user count is huge, and that people are spending more time than ever logged in to Facebook, how many of those actives are just people checking their messages? Or have Facebook set up on their mobile and are technically logged in all day? This is a fantasy; people aren’t really using these services as much as the companies would have you believe.

Originally the promise of social networking was that people we knew, and even more compellingly, people we don’t really know, would create excellent and relevant content, thereby attracting even more people who would create great content. This virtuous cycle of content creation, given the user growth, would make the platform lucrative in its advertising reach. This is all known. What I believe everyone is currently ignoring, in bubble, unicorn-herd fashion, is that the cycle has been severely weakened, and their revenue models are broken.

Chicago: bubble house

What is ironic is that it has been weakened by the very thing that has made such scale on the internet viable: the desire of advertisers to pay for access to the social users.

It is a scenario that plays out in every market everywhere. Initially someone produces something of value, and the market forms around that. As the product evolves, the producer can easily see what works, as far as encouraging margin and price growth, and what doesn’t, what causes price and margin decline.

Most recently we have seen this occur in the PC market. We are now to the point where vendors are completely optimizing on a single dimension: price. Computer buyers ( obviously except those who purchase Macs ) have spoken with their dollars, and their dollars want the best value for money. Hence netbooks, and $300 laptops loaded to the gills with shizware. While it is shocking that people will accept this, this is what the consumer has chosen.

The social space works the same way. The users of the social product, Gmail, Facebook, Google search, etc… are the product, and the advertisers are the customer. At first, users were drawn to the utility and functionality of the services. In Facebook’s case, interestingly, the initial value proposition was that one could have a private relationship with their friends, out of the view of the internet. There was originally no danger that their content would appear to anyone they had decided it shouldn’t.

As time has gone on and these services have attained what they believe is a critical mass of users, the impetus for them to improve the service, protect their users’ privacy, or provide real value to the users has diminished. The incremental income for each new user was less than could be made by increasing the amount that the advertisers were willing to pay. This has been accomplished either by increasing or inventing new areas in which to deliver ads, e.g. the Facebook feed, Paper, etc…, in effect allowing the service to sell more inventory and spam its internal user base, or by taking content that was originally private, but could be used to deliver ads, and making it public. I won’t even start on the morality of the latter, but they have no choice: a free product at internet scale can not serve two masters, but it has to.

As anecdotal evidence, try to remember the last time you saw something interesting in the Facebook feed, something that really grabbed you in a meaningful way. If you are like me, you can’t remember anything truly valuable that you’ve seen in the feed.

In actuality, the feed is built, designed, and optimized to deliver ads, not to deliver content of the highest quality to its users. In fact, the deeper the quality content is buried, the more ads you have to wade through to find it, thereby increasing the service’s revenue.

What all of this has resulted in is a number of once useful services that have thoroughly optimized themselves to deliver ads, and have intrinsically lost their original value to the users. This is what killed MySpace, Friendster, etc… This will ultimately kill Google ( albeit more slowly ), Facebook, and probably Twitter.

The reason the current model of social networks is untenable is that they are all designed around ads. None of them, at least none of the “big successful” ones, are designed around users paying, and optimizing around value for the paying user. This will cause the end of the great social free ad-subsidized internet bubble at some point.

The reason I suggest that it will kill Google more slowly, if at all, is that Google obviously realizes that its current revenue model is untenable. They are aggressively seeking out real value-for-money products to which they can transition when the ad revenue model dries up and the users flee their free online services. People are just bored with these sites; there is nothing on them.

The same thing has happened to television. The reason people are “cord cutting” is that bundling is designed to deliver advertising, not value to the TV services’ customers. People aren’t stupid forever; eventually they realize they are being hornswoggled, basically paying twice: once with their monthly bill, and a second time with their time. It is just a matter of when.


Our Parents Built Voyager 1. What Are We Building?

Posted: September 14th, 2013 | Filed under: Uncategorized
Voyager 1

I am incredibly excited that Voyager has left the solar system; it is a glorious accomplishment for humankind and an incredible testament to human ingenuity and will. We have built the first great ambassador to the stars, and I only wish that ‘we’ included myself and my generation. Voyager is travelling at a mind-boggling velocity, 11 miles per second, fast enough to escape the gravity of the sun that holds our planets locked in their orbits. This is an incredible feat: man has created a device which can travel faster than anything captured within the gravity well of our solar system. We have created something rare, an object on a hyperbolic trajectory moving away from the solar system, and the very fact that it exists indicates the sophistication and potential of mankind.

Voyager Chief Scientist Ed Stone, Credit: NASA/Carla Cioffi

Voyager, Apollo, Mariner, Hubble, and the great space exploration missions of the past century were all made possible by the genius of our parents. As far as I can tell, we have done nothing even remotely approximating the magnitude of scientific discovery generated by those missions. My generation has made it possible to cut deep into NASA’s budget, taking the money and wasting it on everything and nothing at once. We take our greatest scientific minds and put them to work on Wall Street, creating exotic mathematical tools to extract ever declining slices of abstract value from the real value being created by people working. We put mathematical geniuses to work creating codes and breaking codes so that we can perpetuate our advantage over everyone else. What are we doing? We have the wealth, ability, and technology available, not only to our government but also to our private citizens, to solve so many crippling issues around the world, while at the same time furthering human understanding of both biological and extra-terrestrial systems. There is no reason we can not do both. Any rationale given is just an excuse, and one I will not accept after watching the way we the people find money for other inane pet projects.

The derivatives that are created in the financial markets are true genius, and the cryptographic systems and computer technologies that have been created over the past 26 or so years are astounding; I truly appreciate that. What I am frustrated and embarrassed about is what we choose to do with these technologies. PRISM and XKeyscore? Really? Is that where we are putting all of our efforts, on spying on each other? We talk about creating jobs; what part of building up infrastructure on the moon is not about creating new jobs? The effort to build a permanent settlement on the moon would put tens of thousands of people around the world to work in a new race to inhabit our moon.

I am thrilled to see companies like SpaceX, Blue Origin, and Planetary Resources taking up the mantle of exploration. We should be able to create another ambassador to the stars, like Voyager 1 & 2. This should not be a hard sell. We need to build another probe, one that is faster and has more storage capacity than the original Voyager missions. We should create dozens of them, some with highly elliptical solar orbits, so that they can deliver information about what is around us.

Only government currently has the resources to fund this type of purely scientific endeavor. Profit-seeking entities will eventually get there, but only where there is money to be made, or through the will of a great individual with massive personal resources. If the governments around the world lack the vision to see that we need to do this, I’d settle for a league of extraordinary individuals who will put their fortunes and minds to this work. Once these people come forward, all that needs to happen is for the governments of the world to get out of their way.

I understand that some say that the space exploration missions were really covert ( or not so covert ) tests for ICBM technology, and this is likely true; however, the actual science missions got funded. People used to be genuinely interested in exploring the unknown. Naturally a few remain, but they are finding it harder by the day to convince anyone else that exploration and research are still valuable. People frequently discuss practical and pragmatic solutions to problems here on Earth, but they fail to see that many of the problems here on Earth can be solved by trying to get off of it.

The next time you are thinking about trying to build the next Facebook, that next hot dating startup, or the next Pinterest, take a minute and think about building a quantum entangled energy transmission network, a new kind of rocket, or perhaps a new form of energy efficient CO2 scrubbing system instead. We need to seriously consider our legacy. Do we, as a generation, only want to be remembered for our memes ( not that there is anything wrong with a good meme )? Or do we want to do something big? It is critical that we elect people who have vision, a true vision that will set us heading in the right direction, and we must support entrepreneurs who want to take us to the stars. We do not have that in any of our elected officials, at least none that I am aware of who is willing to put their reputation and career on the line to advance space research, and I am afraid that this is the first and largest problem. The second is that we need to stop selling ourselves short.

We can do what seems like science fiction; it is possible. Entrepreneurs should have the vision and the guts to take the risks that will advance all of mankind. Venture capitalists need to ensure that they are taking long bets, like the funding team behind Planetary Resources. This is how we get there: if these companies keep doing what they are doing and then work together to move us further forward, even beyond their initial business plans, eventually the governments will follow.


How Google Can Save Retail and Give Amazon a Black Eye in the Process

Posted: October 10th, 2012 | Filed under: Amazon, android, Apple, artificial intelligence, Companies, Google, iPhone, Lifestyle
Montgomery Ward, closed down

Looking at Google’s new maps inside view brings to mind a general problem with physical shopping vs online shopping. With online shopping, I know exactly who has the item that I wish to buy, and I know what the price of that item is. I can instantly perform comparison shopping without leaving the comfort of my home. This convenience has a downside as well: when I do not know exactly what I want to buy and am just shopping for entertainment, the online experience lacks substance. It is much more fun to peruse Best Buy than it is to scroll down a page of pictures of gadgets. This is where Google can help.

One of the things that Google has done that has no clear immediate value to the company is to map the world in extreme detail, and this has come to include the inside of stores. Amazon does not have this capability. In addition, Google has its hangout technology which, when leveraged with this inside indexing, gives Google both a search index of the real world and the ability to offer a high-fidelity experience with an actual salesperson.

Imagine that Google indexes all of the shops in the world, coffee shops, hot dog stands, everything, along with real-time inventory of the items in search results. Then they index those images using OpenCV or some other image recognition technology. Alongside that, every retailer in the world assigns one or more salespeople inside the shop to carry a tablet capable of running a hangout. Again, this represents a giant biz-dev nightmare, but bear with me.

Now comes the beautiful part. I am at home surfing the web on my tablet when I get the itch to go shopping. Instead of hopping into my car, I allow Google to suggest stuff that I might be interested in ( Amazon has a huge lead here, but Google will likely catch up due to their having more signals ). While I’m looking through the suggestions, I see a watch that I am very interested in, so I click into it and it shows me a map of all of the places around me that have that watch. I click again and ask for a horizontally swipeable inside view of the top 5 locations that have the watch.

I can actually browse the inside of the store and see the display with the watch in high resolution. There will be a little place inside the store view that I can click if I need help, say if the watch is not on display, or the shopkeeper will simply be notified that I am browsing. At this point, the shopkeeper can signal that they want to have a hangout with me in g+, or I can swipe to the next place at any time and browse there. If I do want to discuss the item in a hangout, I can either initiate or respond to an invitation from the shopkeeper. While on the hangout, the salesperson can express their craft: showing me alternate items, asking me to send data over such as measurements, exchanging documents, etc…

This future would be tremendous, and it is something that only Google can do. But wait, there’s more! Imagine that at this point, with my Google Glass, I can have a full AR view with the details of each item coming up in my heads-up display, along with other shops’ more aggressive deals ( read: ads ). It would be ridiculously awesome!

Ultimately this will level the playing field between online and brick-and-mortar retailers, with the brick-and-mortar guys having a slight advantage until the online retailers start hiring sales reps for g+ hangouts or an equivalent technology. I believe that this will bring a pretty large increase in the number of salespeople employed and reverse the current employment drain that retail is experiencing. It makes perfect sense as to why Amazon is trying to build out its mapping technology as quickly as possible. It will be interesting to see who wins.