Posted: October 10th, 2012 | Author: irv | Filed under: Amazon, android, Apple, artificial intelligence, Companies, Google, iPhone, Lifestyle | Tags: amazon, Electronics, Google, Retail, Shopping, Stores | 1 Comment »
Montgomery Ward closed down
Looking at Google’s new Maps inside view brings to mind a general problem with physical shopping versus online shopping. With online shopping, I know exactly who has the item I wish to buy and what it costs, and I can instantly comparison shop without leaving the comfort of my home. This convenience has a downside as well: when I do not know exactly what I want to buy and am just shopping for entertainment, the online experience lacks substance. It is much more fun to peruse Best Buy than to scroll down a page of pictures of gadgets. This is where Google can help.
One of the things Google has done that has no clear immediate value to the company is mapping the world in extreme detail, which has come to include the insides of stores. Amazon does not have this capability. In addition, Google has its hangout technology, which, when combined with this inside indexing, gives Google both a search index of the real world and the ability to offer a high-fidelity conversation with an actual salesperson.
Imagine that Google indexes every shop in the world (coffee shops, hot dog stands, everything) along with real-time inventory of the items in search results. Then it indexes those images using OpenCV or some other image-recognition technology. Alongside that, every retailer in the world assigns one or more salespeople inside the shop to carry a tablet capable of hosting a hangout. Again, this represents a giant biz-dev nightmare, but bear with me.
Now comes the beautiful part. I am at home surfing the web on my tablet when I get the itch to go shopping. Instead of hopping into my car, I let Google suggest things I might be interested in (Amazon has a huge lead here, but Google will likely catch up because it has more signals). While looking through the suggestions, I see a watch that I am very interested in, so I click into it and it shows me a map of all of the places around me that carry that watch. I click again and ask for a horizontally swipeable inside view of the top five locations that have it.
I can actually browse the inside of the store and see the display with the watch in high resolution. There will be a little spot inside the store I can click if I need help, say if the watch is not on display, or the shopkeeper will simply be notified that I am browsing. At that point, the shopkeeper can signal that they want to have a hangout with me on Google+, or I can swipe to the next place at any time and browse there instead. If I do want to discuss the item in a hangout, I can either initiate one or respond to an invitation from the shopkeeper. During the hangout, the salesperson can exercise their craft: showing me alternate items, asking me to send data over such as measurements, exchanging documents, and so on.
This future would be tremendous, and it is something that only Google can do. But wait, there’s more! Imagine that at this point, with my Google Glass, I get a full AR view, with the details of each item coming up in my heads-up display alongside other shops’ more aggressive deals (read: ads). It would be ridiculously awesome!
Ultimately this will level the playing field between online and brick-and-mortar retailers, with the brick-and-mortar side holding a slight advantage until online retailers start hiring sales reps for Google+ hangouts or an equivalent technology. I believe this will bring a fairly large increase in the number of salespeople employed and reverse the employment drain that retail is currently experiencing. It also makes perfect sense that Amazon is trying to build out its mapping technology as quickly as possible. It will be interesting to see who wins.
While the garbochess engine is plenty strong as used in the Nc3 Bb4 Chromebook chess game, I thought it would be interesting to look at adjusting its weighting mechanism based on successful and unsuccessful outcomes.
The first thing I had to look at was how garbochess weights potential moves. This took me into the super interesting world of bitboards. A quick aside: I have been working on MapReduce for the past few weeks, so looking at early methods of dealing with big data was fascinating. Chess has an estimated ~10^120 possible games, and successfully evaluating all of the possible moves for a given position, plus all of the possible counters, weighting them, and choosing the best move given the criteria certainly qualifies as big data.
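To make the bitboard idea concrete, here is a minimal sketch (in Python, not garbochess’s actual JavaScript) of the standard trick: the board is packed one bit per square into a 64-bit integer, and a piece’s moves fall out of a handful of shifts and masks. The square numbering (bit 0 = a1, bit index = rank*8 + file) and the wrap-around masks are the conventional ones, assumed here for illustration.

```python
# One bit per square: bit index = rank*8 + file, so bit 0 is a1, bit 63 is h8.
FILE_A = 0x0101010101010101          # a1, a2, ... a8
FILE_B = FILE_A << 1
FILE_G = FILE_A << 6
FILE_H = FILE_A << 7
MASK64 = (1 << 64) - 1               # keep Python's big ints to 64 bits

def knight_attacks(bb):
    """All knight moves from the squares set in bb, as a single bitboard.

    Each shift is one of the eight knight offsets; the file masks discard
    moves that would wrap around the board edge.
    """
    not_a  = MASK64 & ~FILE_A
    not_ab = MASK64 & ~(FILE_A | FILE_B)
    not_h  = MASK64 & ~FILE_H
    not_gh = MASK64 & ~(FILE_G | FILE_H)
    return (((bb << 17) & not_a)  | ((bb << 15) & not_h)  |
            ((bb << 10) & not_ab) | ((bb << 6)  & not_gh) |
            ((bb >> 6)  & not_ab) | ((bb >> 10) & not_gh) |
            ((bb >> 15) & not_a)  | ((bb >> 17) & not_h)) & MASK64
```

A knight on e4 (bit 28) gets all eight destination bits set, while a knight on a1 (bit 0) gets only two; counting the set bits in the result is how mobility scores like garbochess’s can be computed cheaply.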
Interestingly, the approach wasn’t the Hadoop approach; the hardware for such brute-force methods wasn’t available. Instead, early chess programmers tried to filter out undesirable moves: obvious blunders, moves with no clear advantage, and so on. What they ended up with was a pretty manageable set of moves for a circa-2011 computer.
The way garbochess considers moves, it looks at mobility for a given piece, control of the center, whether a capture is possible, what the point differential for a trade would be, and so on, and assigns a score to each possible legal move. It then runs through the set repeatedly, re-scoring moves relative to the available alternatives and removing the lowest-scored ones, eventually arriving at the best possible move. What I wanted it to consider, given that and the specific weights (mobility versus actual point value for a given piece), was to use a Markov chain for reinforcement learning: describe the entire process of a game, then rate each move, with endgame moves weighted as more important. Every time the machine takes an action that leads to a success, the bias on the scoring for that action grows heavier. Failure doesn’t automatically nullify the learning, but it definitely has an effect.
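The update rule I have in mind can be sketched roughly like this. Everything here is hypothetical (the feature names, the learning rate, the endgame bias, the loss scale are all my own illustrative choices, not anything in garbochess): each move’s feature gets its weight nudged after a finished game, later moves get a larger step, and a loss dampens rather than erases what was learned.

```python
from collections import defaultdict

# Multiplier applied on top of the engine's static score for each feature.
# Starts neutral at 1.0 for features we've never seen.
weights = defaultdict(lambda: 1.0)

def update_weights(game_features, won, lr=0.05, endgame_bias=2.0, loss_scale=0.5):
    """Nudge feature weights after a finished game.

    game_features: one feature key per move, in game order, so index i
    tells us how close to the endgame the move was. Later moves get a
    step up to (1 + endgame_bias) times larger than the first move's.
    A loss shrinks weights by only loss_scale of the step, so failure
    dampens the learning instead of nullifying it.
    """
    n = len(game_features)
    for i, feat in enumerate(game_features):
        step = lr * (1.0 + endgame_bias * i / max(n - 1, 1))
        if won:
            weights[feat] += step
        else:
            weights[feat] -= step * loss_scale

def adjusted_score(static_score, feat):
    """The engine's static score, biased by what past outcomes taught us."""
    return static_score * weights[feat]
```

After a won game whose moves all exercised, say, a "mobility" feature, `adjusted_score` would rank mobility-driven moves higher than the static evaluation alone; a subsequent loss pulls the weight partway back down.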
I still need to do more work on changing the weighting based on how much trouble the machine is in, whether it has an advantage, and so on. But as a start for machine learning in chess, I think it works.
Posted: March 9th, 2009 | Author: irv | Filed under: artificial intelligence | Tags: artificial intelligence, compete, computation, Google, Web | 2 Comments »
Ars Technica has no faith. They are already saying that Wolfram’s knowledge engine will fail, based, I’d imagine, on the complete and utter disasters that Cuil and other would-be Google challengers have been. Here’s why I think the computational knowledge engine can be a success.
First of all, it’s Stephen Wolfram, who truly shouldn’t be underestimated. He is also not claiming it can cure cancer; really, he isn’t saying what it can do or what its ultimate goal is, except that it is going to answer simple questions. I don’t understand why this is impossible. Technology is clearly accelerating at a near-exponential rate: the improvement in technology and science between 1997 and 2000 was probably matched again by June 2002, and so on. If you accept that, then you have to believe that at some point soon we should get to an intelligent system that can answer a simple question like “what color is the sky?”, not by looking it up in a database, but by actually reasoning out the answer.
I think that Ars isn’t giving these guys enough credit. I can’t wait to see what they have cooked up.