In the first part of this post, we saw how Google engineer Paul Buchheit’s 20% side project, which led to the creation of Gmail, had its origins in his itch to fix the buggy Web email services available in the market. He wanted to add a then-unheard-of 1 gigabyte of storage space so that users would never have to spend hours sorting and deleting their mails. Ryan Tate’s book The 20% Doctrine says this was roughly five hundred times the storage space offered by competitors Hotmail.com and Yahoo! Mail.

The question then was how to finance this free extra storage space. Tate says Buchheit’s manager Marissa Mayer wanted him to charge users for the extra storage. But instead, Buchheit started looking at contextual advertisements, as in the case of AdWords. AdWords shows Google searchers advertisements based on their search terms, on the right-hand side as well as on top of the search results page. For instance, if someone searched for ‘hotel’, advertisements for hotels would turn up. Buchheit wondered if the same logic could be extended to email. What if ads were shown on the side of emails based on the contents of the mail, he thought. On the face of it, it was brilliant, but it also sounded creepy, says Tate.

Mayer expressed her misgivings bluntly. “People are going to think there are people here reading their emails and picking out the ads, and it’s going to be terrible,” she recalled thinking in a later Stanford University podcast. The podcast also recounted how Buchheit actually broke his promise to Mayer not to work on combining advertisements with email. “I remember leaving, and when I walked out the door I stopped for a minute, and I remember I leaned back and I said, ‘So Paul, we agreed we are not exploring the whole ad thing now, right?’ And he was like, ‘Yup, right’.” Tate says Buchheit broke his word almost immediately. “Over the next few hours, he hacked together a prototype of the ‘ad thing’, a system that would read your email and automatically find a related ad to display next to it.”

HE USED A PORN FILTER TO CREATE ADSENSE

Tate also gives the details of how Buchheit went about creating the AdSense building blocks. Just as he had adapted the Usenet search experience to create Gmail, he started working with another tool, a porn filter no less, to create AdSense. This was basically a piece of code he had created to screen for adult content, probably the one used to switch the SafeSearch filters on and off in Google Search settings. “Normally, the filter examined a batch of known porn pages and listed words that occurred disproportionately within those pages. Other pages containing those words were then assumed to be porn. Buchheit instead turned the filter on Gmail messages, using the resulting keywords to select advertisements from Google’s AdWords database.”
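Here is a toy Python sketch of that two-step idea: find the words that occur disproportionately often in a target document relative to a background corpus, then use those words as lookup keys into a keyword-indexed ad inventory. Everything here is an illustrative assumption, not Buchheit’s actual code; the scoring function, the ad_inventory dictionary, and the sample texts are all invented.

```python
from collections import Counter

def distinctive_keywords(target_docs, background_docs, top_n=5):
    # Words that occur disproportionately often in the target docs:
    # the same trick the porn filter applied to known adult pages.
    target = Counter(w for d in target_docs for w in d.lower().split())
    background = Counter(w for d in background_docs for w in d.lower().split())
    t_total = sum(target.values())
    b_total = sum(background.values())
    def lift(word):
        # Ratio of in-target frequency to background frequency (+1 smoothing).
        return (target[word] / t_total) / ((background[word] + 1) / b_total)
    return sorted(target, key=lift, reverse=True)[:top_n]

# Turned on an email instead of a porn page: extract the mail's most
# distinctive words, then use them to pick ads from a keyword-indexed
# inventory (a stand-in for the AdWords database).
ad_inventory = {"hiking": "Ad: hiking boots, 20% off",
                "hotel": "Ad: book hotels cheap"}

email = ["Want to go hiking this weekend? The hiking trail is open."]
background = ["The meeting is at noon.", "Please review the budget report."]
for word in distinctive_keywords(email, background):
    if word in ad_inventory:
        print(ad_inventory[word])   # -> Ad: hiking boots, 20% off
```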
Tate advises youngsters pursuing side projects to copy Buchheit’s method of adapting old work. “As tempting as it is to start from a clean slate, always look for opportunities to use something old to create something fresh,” he writes.

HOW BUCHHEIT WON OVER THE DECISION MAKERS

Although Buchheit directly rebelled against his boss in putting together the delivery mechanism of what turned out to be AdSense, Tate says it helped that Google had a culture where results prevail over preconceptions. The next day, when Marissa Mayer opened her Gmail account only to see ads running on the side of her mails, her immediate instinct was to summon Buchheit for an explanation. But she delayed action, thinking he deserved the mercy of sleeping for a few more hours after having worked the whole night. Tate writes, “While she waited, Mayer checked her Gmail. There was an email from a friend who invited her to go hiking — and next to it, an ad for hiking boots. Another email was about Al Gore coming to speak at Stanford University — and next to it was an ad for books about Al Gore. Just a couple of hours after the system had been invented, Mayer grudgingly admitted to herself, AdWords was already useful, entertaining, and relevant.”

Tate writes that like Mayer, Larry Page and Sergey Brin loved AdSense. “In short order, the Google high command decided AdSense would be a top priority. It was a no-brainer: Google’s main revenue source, AdWords, placed contextual ads alongside search results. But search results were just 5 percent of Web views; AdSense promised to open up the other 95 percent to ads, since it could go inside any Web page,” Tate writes.

According to Tate, it took just six months for AdSense to launch. In June 2003, it was made available to the public as a widget that any publisher could attach to any Web page. It generates more than $10 billion per year for Google. Gmail itself, for which AdSense was first developed by Buchheit, launched to the public on April 1, 2004, in what was initially thought to be an April Fool’s Day practical joke. Today it is probably the world’s largest free Webmail service, as well as the pivot around which the Google Apps for Business suite functions. So what are the lessons people who run 20% projects can take from Buchheit’s innovations in the development of Gmail and AdSense?
e.o.m.
Google’s ‘20 percent policy’ has been much celebrated. Although people are skeptical about the company’s commitment to the policy now, it still remains in force, though it was never set down as a written document. This unwritten policy allows employees to spend up to one day a week, or four days a month, on side projects of their choosing, using the company’s own resources. Many such projects later went on to become part of Google’s core offerings, including Gmail, AdSense, and Google News.

Google engineer Paul Buchheit was the person responsible for the creation of both AdSense and Gmail. Both began as his side projects. Today, AdSense is Google’s second biggest revenue earner after AdWords. Gmail is probably the biggest web-based email service in the world. It was revolutionary when it was introduced, it still leads from the front, and it is the pivot around which the company offers its cloud-based Google Apps for Business suite. So successful and threatening did Google Apps for Business become for Microsoft’s bread-and-butter Office suite that Microsoft was forced to offer a cloud version of it in Office 365. This meant the Outlook email client had to be made available as a Web offering, so Microsoft ended up renaming Hotmail as Outlook. Look how a 20% project started by an engineer at Google ended up affecting even the company’s competitors! Much of what I am going to write here is taken from Ryan Tate’s book, The 20% Doctrine, whose first chapter chronicles Buchheit’s creation of Gmail and AdSense.

BEGIN BY SOLVING A PERSONAL ITCH

Tate says 20% side projects usually begin as an attempt to scratch some personal itch. In the early years of the aughts, Buchheit’s itch was clearly email. Most of the popular Web-based offerings sucked badly for him. For one, their storage capacity was minimal, and users had to constantly work at trimming and deleting mails. Search capabilities were also sadly lacking: most providers didn’t have the knowhow to search mails for keywords appearing in the body of the text.

Buchheit was conveniently placed. He had just finished fixing Google Groups, an archive of the online conversations earlier known as Usenet, and his fix involved making the archive searchable. He realized that email messages were remarkably similar to messages on a board like Google Groups. The ‘To:’, ‘From:’, ‘Date:’, and ‘Subject:’ fields were shared, and the formatting rules were common as well. So Buchheit had an itch to scratch, and he knew what to do. It took him just a few hours to release the first version of Gmail. He shared it with a few colleagues, with code supporting only his own account. It would be good if the code supported our accounts too, they replied. And so Gmail 2.0 was soon released, supporting search of users’ own email accounts. He followed the ‘release early and release often’ principle which is today the defining theme of the agile software development school.

An early innovation was ‘Conversation View’, which displayed all replies to an email message as a unified thread. This prevented colleagues from talking past one another, as had been the practice before. “They would have to read all prior replies to an email before they could send one of their own,” says Tate. Very early, Gmail distinguished itself by its search capabilities. As mentioned before, this was an area where other email providers sucked. But Gmail quickly nailed comprehensive email searches.
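To make the search idea concrete, here is a minimal sketch of the classic technique behind keyword search: an inverted index mapping each word to the messages that contain it. This illustrates the general approach only; it makes no claim about how Gmail’s search was actually implemented, and the sample messages are invented.

```python
import re
from collections import defaultdict

index = defaultdict(set)  # word -> ids of messages containing it

def tokenize(text):
    return re.findall(r"[a-z0-9]+", text.lower())

def add_message(msg_id, body):
    for word in tokenize(body):
        index[word].add(msg_id)

def search(query):
    # Return ids of messages containing every query word.
    words = tokenize(query)
    if not words:
        return set()
    result = set(index[words[0]])
    for w in words[1:]:
        result &= index[w]
    return result

add_message(1, "Lunch on Friday?")
add_message(2, "The Friday deploy is cancelled")
print(search("friday"))          # {1, 2}
print(search("friday deploy"))   # {2}
```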
Another innovation by Buchheit was the extensive use of JavaScript. It made Gmail feel like a desktop email client such as Outlook, in contrast to the other Web emails of the time, like Hotmail.com. “For example, writing a message on Hotmail.com could easily require four page loads: one for ‘new message’, one to open your address book, one to search it, and one to pick a recipient. On Gmail, you clicked just once, and JavaScript generated the blank message form right away. If you started to type a friend’s name, Gmail would offer to autocomplete his email address. This felt like magic,” Tate writes in his book.

HOW TO SUCCEED WITH A 20% PROJECT, THE BUCHHEIT WAY

Tate writes that to succeed with a 20% project, “the trick is to find a way to make a small initial prototype and then take small steps forward”. Tech start-ups refer to this as the Minimum Viable Product, notes Tate. “The sooner you release, the sooner you get information from your users about where the product should go,” he writes. The Gmail churn was so intense, for instance, that the front-end was rewritten about six times and the back-end about three times.

The next concern for a 20% project developer is to know when to stop. As in, when do you consider your project sufficiently developed that you are ready to ship? Buchheit took to heart the advice from then Google CEO Eric Schmidt that he should launch only after getting 100 happy users for Gmail inside Google. Buchheit later said that he and his team would approach people directly for their feedback, and if someone set the bar too high, they would abandon that user, reasoning they were unlikely to ever satisfy him. But in short order, they won over 100 happy users by making small tweaks to the code based on user feedback.

Humility is an important quality for such a project developer to have. Tate quotes Chris Wetherell, the lead developer of another 20% project called Google Reader (now shut down), as saying about the Gmail project, “Can you imagine working on it for two years? No daylight. Very little feedback. Many iterations, many. Some so bad that people thought, ‘This will never launch. This is the worst thing ever.’ I remember being in a meeting, and a founding member of Google said, ‘This is brand destroying. This will destroy our brand. This will crush our company’.” But Buchheit never gave up, even after such withering criticism from within the company.

In the next part, we will take up how he created AdSense as a way of monetizing Gmail, since it came with a till-then unheard-of one gig of free storage for users. There are many lessons for innovators and entrepreneurs in the strategies Paul Buchheit used to get buy-in from the company’s top leadership to invest its best resources in both Gmail and AdSense. e.o.m.

Google’s translation service is in the news in India now for the wrong reasons. Apparently, the Union Public Service Commission (UPSC), which conducts the civil services examinations, uses the free Google Translate service to translate most of the questions in the Civil Services Aptitude Test, or CSAT, for the preliminary exam. Many exam takers blame the poor English-to-Hindi translation for making CSAT insurmountable for them.
Obviously, UPSC needs to fix the translation part. It could consider using the services of professional translators instead of an algorithm-based service like Google’s. But having said that, one has to note that on the whole, Google has improved considerably on the translation front from where it began. Randall Stross, in his book Planet Google, has provided a fascinating account of how Google nailed machine translation, a long-standing bugbear in computing.

Stross begins by saying that machine translation has a long tradition of overpromising and underdelivering. Given Cold War priorities, Russian-to-English translation of documents was the initial area of focus for researchers. But word-for-word matching had its limitations, including the famous ‘water goat’ problem, a reference to how computers frequently translated the phrase ‘hydraulic ram’. Researchers thought all they had to do was add syntactical rules to word-for-word matching and keep refining the process until translation was fixed. That certainly improved the quality of translations, and soon commercial providers of translation services, including Systran, began entering the field. But Stross notes that this rules-based methodology was only one approach to machine translation. An alternative approach, advanced by researchers at IBM in the 1970s, was known as Statistical Machine Translation. It was based not on linguistic rules manually drawn up by humans, but on a translation model that the software develops on its own as it is fed millions of paired documents — an original and a translation done by a human translator.

GOOGLE MADE USE OF IBM RESEARCH

Historically, IBM is known as a company with such a vast bureaucracy that many divisions do not know of the findings and research advances of other divisions in the same organization. It often falls to others to make the most of the research advances made at IBM. For instance, Oracle was formed after Larry Ellison was alerted to the potential of an obscure research paper published at IBM about relational databases. Google made its tentative foray into translation in 2003 by hiring a small group of researchers and leaving them free to have a go at the problem. As was to be expected, they soon saw the potential of Statistical Machine Translation. In this model, says Stross, “the software looks for patterns, comparing the words and phrases, beginning with the first sentence in the first page of Language A, and its corresponding sentence in Language B. Nothing much can be deduced by comparing a single pair of documents. But compare millions of paired documents, and highly predictable patterns can be discerned…”

So the task before the Google translators was one of scale. To fix the translation problem, they needed millions of paired documents. Stross says the Google engineers solved it by getting hold of a corpus of 200 billion words from the United Nations, where every speech made in the General Assembly, as well as every document produced, is translated into five other languages. “The results were revelatory,” says Stross. “Without being able to read Chinese characters or Arabic script, without knowing anything at all about Chinese or Arabic morphology, semantics, or syntax, Google’s English-language programmers came up with a self-teaching algorithm that could produce accurate, and sometimes astoundingly fluid, translations.” Google soon went to town with its achievement.
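For a feel of how a translation model can ‘develop on its own’ from paired sentences, here is a toy Python sketch in the spirit of the early IBM word-alignment models. It learns word-translation probabilities purely from co-occurrence patterns across sentence pairs; the three-pair corpus and every detail of the setup are drastic simplifications for illustration, not Google’s system.

```python
from collections import defaultdict

# Toy parallel corpus (real systems trained on millions of paired
# documents, such as the 200-billion-word UN corpus mentioned above).
pairs = [
    ("la maison".split(), "the house".split()),
    ("la fleur".split(), "the flower".split()),
    ("une maison".split(), "a house".split()),
]

# t[f][e]: probability that foreign word f translates to English word e.
# Start uniform; expectation-maximization sharpens it from patterns alone.
t = defaultdict(lambda: defaultdict(lambda: 1.0))

for _ in range(20):                        # EM iterations
    count = defaultdict(lambda: defaultdict(float))
    total = defaultdict(float)
    for f_sent, e_sent in pairs:           # E-step: expected alignments
        for f in f_sent:
            norm = sum(t[f][e] for e in e_sent)
            for e in e_sent:
                frac = t[f][e] / norm
                count[f][e] += frac
                total[f] += frac
    for f in count:                        # M-step: re-normalize counts
        for e in count[f]:
            t[f][e] = count[f][e] / total[f]

best = max(t["maison"].items(), key=lambda kv: kv[1])
print(best)   # ('house', ...): learned without any dictionary or rules
```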
At a briefing in May 2005, it displayed two translations of a headline in an Arabic newspaper side by side — its own as well as that of Systran. The Systran translation read ‘Alpine white new presence tape registered for coffee confirms Laden’. It was sheer nonsense. The Google translation rendered it as ‘The White House confirmed the existence of a new Bin Laden Tape’. Pretty impressive!

Google didn’t stop there. It entered its translation service in the annual competition for machine-translation software run by the National Institute of Standards and Technology in the United States. Google came first in both Arabic-to-English and Chinese-to-English, leaving Systran far behind. Google repeated the feat in 2006, coming first in Arabic and second in Chinese. Stross says a stupefied Dimitris Sabatakakis, the CEO of Systran, could not grasp how Google’s statistical approach could outsmart his company, which had been in the machine translation business since 1968, and which had initially even powered Google’s translation efforts. At Systran, “if we don’t have some Chinese guys, our system may contain some enormous mistakes”, he was quoted as saying. He could not understand how Google, without those Chinese speakers double-checking the translation, had beaten Systran so soundly. Incidentally, Google hasn’t taken part in the competition since 2008, perhaps because it found there was nothing left to prove.

FROM MONOLINGUAL TO BILINGUAL

Stross’ description of how Google built up a monolingual language model is also a fascinating read. While a bilingual model translates from one language to another, the monolingual language model directs its efforts at using software to fluently rephrase whatever the translation model produced. In other words, this model perfected the language after it had already been translated from another. How did Google manage this? Randall Stross has an answer. “The algorithm taught itself to recognize what was the natural phrasing in English by looking for patterns in large quantities of professionally written and edited documents. Google happened to have ready access to one such collection on its servers — the stories indexed by Google News.” Stross says that “even though Google News users were directed to the Web sites of news organizations, Google stored copies of the stories to feed its news algorithm. Serendipitously, this repository of professionally polished text — 50 billion words that Google had collected by April 2007 — was a handy training corpus perfectly suited to teach the machine translation algorithm how to render English smoothly.”

So Google Translate may not be perfect. But it is constantly getting better, using software that teaches itself to recognize patterns by looking at large volumes of data. “Google did not claim to have the most sophisticated translation algorithms, but it did have something that other machine-translation teams lacked — the largest body of training data.” As Franz Och, the engineer who led (and still leads) Google Translate, said, “There’s a famous saying in the natural language processing field, ‘More data is better data’.” Indeed. Data has helped Google prevail as the leader in yet another segment of search. e.o.m.

Remember the clickety-clack of the lowly typewriter? A generation has grown up probably without seeing one in action, but typewriters are back in demand as tools for the ultimate search pros — the spies.
Last week, the chairman of the German parliament’s intelligence committee set tongues wagging by recommending the use of manual typewriters by German spies to avoid digital information leaks. The immediate provocation was the arrest of a German spy who was doing dirty work for the Americans. In return for dollars, of course. The Germans aren’t the first spy agency to consider switching to typewriters. Last year, the people in charge of the Kremlin’s communications department decided to buy 20 electric typewriters to minimize the chances of information leaks.

So the typewriter has its uses after all. At a time when typewriters are all but extinct, it is ironic that they are in demand from the very spymasters who take much credit for their reliance on the latest gadgetry. Truly an old economy solution to a new economy problem.

QWERTY VS. DVORAK LAYOUT

It is interesting to note that the current design of typewriter and computer keyboard layouts, popularly known as QWERTY because the letters of the top row are arranged in that order, has nothing to do with efficiency or logic. It deliberately increased the distance between the most frequently used letters to prevent the keys from clashing with each other and getting jammed. The man behind the QWERTY patent, Christopher Latham Sholes, arranged the keys by putting the letters for often-typed English words in difficult-to-reach places, favouring the non-dominant left hand. The arrangement prevented the typewriter keys from getting entangled. Unfortunately, by force of habit, electronic typewriters and computers adopted the same keyboard even though jamming of keys is no longer a concern. A competing layout, called the Dvorak Simplified Keyboard (DSK), has been in existence since 1936. It was designed by Dr. August Dvorak, a former education professor, and it follows a few basic principles, such as placing the most frequently used letters on the home row and encouraging alternation between the hands.
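As a quick, informal way to see the home-row principle at work, one can count what share of a sample text’s keystrokes land on the home row under each layout. This toy comparison is my own construction, not one of the formal studies mentioned next:

```python
# Letter rows of each layout (top, home, bottom); punctuation keys omitted.
QWERTY = ("qwertyuiop", "asdfghjkl", "zxcvbnm")
DVORAK = ("pyfgcrl", "aoeuidhtns", "qjkxbmwvz")

def home_row_share(text, layout):
    top, home, bottom = layout
    letters = [c for c in text.lower() if c.isalpha()]
    # Fraction of typed letters that sit on the layout's home row.
    return sum(c in home for c in letters) / len(letters)

sample = ("the quick brown fox jumps over the lazy dog "
          "and other plain english sentences like this one")
print(round(home_row_share(sample, QWERTY), 2))  # roughly a third
print(round(home_row_share(sample, DVORAK), 2))  # well over half
```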
Numerous studies have shown that the Dvorak layout is more efficient. Many operating systems, including Windows, also give individual users the option to change their layout to Dvorak. But the dominance of QWERTY continues, even though the original reason for its lettering arrangement has long since ceased to matter.

THE BANDWAGON EFFECT

QWERTY still prevails because of what game theorists call the Bandwagon Effect. Whether good or bad, QWERTY usage has become a social convention. As Avinash Dixit and Barry Nalebuff explain in their book The Art of Strategy, “The uncoordinated decisions of individuals keep us tied to QWERTY. It is the established system. Almost all keyboards use it. So we all learn it and are reluctant to learn a second layout. Keyboard manufacturers continue, therefore, with QWERTY. The vicious circle is complete.” After doing some number crunching, the authors conclude that if the fraction of typists using QWERTY falls below 72%, DSK can be expected to take over. “Fewer than 72% of new typists learn QWERTY, and the subsequent fall in its usage gives new typists an even greater incentive to learn the superior layout of DSK. Once all typists are using DSK, there is no reason for a new typist to learn QWERTY, and it will die out.” But they add a caveat. “The mathematics says only that we will end up at one of these two possible outcomes: everyone using DSK or 98% using QWERTY. It does not say which will occur. If we are starting from scratch, the odds are in favour of DSK being dominant. But we are not. History matters. The historical accident that led to QWERTY capturing nearly 100% of typists ends up being self-perpetuating…” Looks like QWERTY is here to stay. But whether the typewriter will make a second coming or not, only the spooks can tell. e.o.m.

In our previous blog post, we had looked at how GoTo.com introduced the concepts of real-time auctioning of online ads and pay per click advertising. Google later adapted and refined them to create the AdWords behemoth, which today is the single largest source of revenue for the company. Advertising accounted for $50 billion of Google’s $55 billion revenues in 2013. We had also seen that the auctioning system built from the ground up by Eric Veach for Google charged the AdWords auction winner for a given keyword only a penny more than the second-highest bid. This was surprisingly similar to the Vickrey Auction model used by the US Federal Reserve to auction government securities. William Vickrey was an economist and Nobel laureate. Veach had created his real-time auctioning system without being aware of the Vickrey model! Simply amazing.

HOW THE VICKREY MODEL WORKS

In the traditional Vickrey auction, “all bids are placed in a sealed envelope. When the envelopes are opened to determine the winner, the highest bid wins. But there’s a twist. The winner doesn’t pay his or her bid. Instead, the winner only has to pay the second highest bid,” say Avinash Dixit and Barry Nalebuff in their celebrated Game Theory work, The Art of Strategy. Notice that this is slightly at variance with the real-time AdWords auctions, which take place online, and where the winner has to pay a penny more than the next highest bid. Dixit and Nalebuff explain, using clear illustrations, that in a Vickrey auction bidders have the ‘Dominant Strategy’ of bidding their true valuation.
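A small sketch makes the point concrete. The names and numbers below are invented for illustration; the auction rule itself (highest bid wins, winner pays the second-highest bid) is the one described above. Notice that your bid only determines whether you win, never what you pay, which is why bidding your true value is dominant:

```python
def vickrey(bids):
    # Sealed-bid second-price auction: highest bid wins,
    # but the winner pays only the second-highest bid.
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner = ranked[0][0]
    price = ranked[1][1]
    return winner, price

my_value = 100                        # what winning is truly worth to me
for my_bid in (80, 100, 120):         # shade, bid truthfully, overbid
    bids = {"rival": 90, "me": my_bid}
    winner, price = vickrey(bids)
    payoff = my_value - price if winner == "me" else 0
    print(my_bid, "->", winner, "wins, my payoff:", payoff)
# 80  -> rival wins, payoff 0 (shading lost me a profitable win)
# 100 -> me wins at 90, payoff 10
# 120 -> me wins at 90, payoff 10 (overbidding gains nothing here, and
#        risks a loss whenever a rival bids between my value and my bid)
```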
They go on to define a Dominant Strategy as the best play for an auction participant no matter what the others are doing. The English and Japanese auctions are two other formats, each at slight variance with the Vickrey model. In the English Auction, used by auction houses like Sotheby’s or Christie’s, the auctioneer stands in a room calling out bids which increase at every call. The authors say that in an English auction, “a bidder learns something about what others think the item is worth by seeing some of their bids”. Their advice to bidders in an English Auction is to bid until the price exceeds their ‘Value’, and then drop out. Dixit and Nalebuff define Value as the bidder’s “walkaway number…the highest price at which the bidder will still want to win the item”.

In the Japanese Auction, “all the bidders start with their hands raised or buttons pressed. The bidding goes up via a clock. The clock might start at 30, and then proceed to 31, 32…and upwards. So long as your hand is raised, you are in the bidding. You drop out by lowering your hand. The catch is that once you lower your hand, you can’t put it back up again. The auction ends when only one bidder remains.” “An advantage of the Japanese Auction is that it’s always clear how many bidders are active. In an English Auction, someone can remain silent even though they were willing to bid all along. The person can then make a surprise entry late in the contest.” So in the Japanese auction, everyone sees where everyone else drops out. In contrast, the authors say, bidders in a Vickrey Auction don’t get a chance to learn anything about the other bids until the auction is over. Again, in both the Japanese and English auctions, what the winning bidder has to pay is essentially the second-highest bid.

THE DUTCH AUCTION

Google is also famous for using the Dutch auction to pick buyers for its shares when it first went for a stock market listing in 2004. Dixit and Nalebuff say that in the Dutch Auction, used to sell flowers in the Netherlands at places like Aalsmeer, the process is the reverse of the Japanese Auction. Here the auction starts with a high price that declines. “Imagine a clock that starts at 100 and then winds down to 99, 98…The first person to stop the clock wins the auction and pays the price at which the clock was stopped.” e.o.m.

GoTo.com, the pioneering search engine founded by Bill Gross, which changed its name to Overture and was subsequently acquired by Yahoo! and shut down, has now reclaimed its original domain name and returned to familiar turf.
In interviews online, as well as on its Facebook page, the team behind the new site has given enough indications that this is a serious attempt to revive the brand. It has received $6 million in funding from VC firms, including the storied Draper Fisher Jurvetson. The 25-strong team is led by Jeffrey Brewer. Another name associated with the revived site is Joshua Metzger. Brewer was the CEO of Overture before it was acquired by Yahoo! for $1.6 billion in 2003; Metzger was an SVP at Overture. So what we are seeing at GoTo.com now is a major comeback attempt by more or less the same team that was originally behind GoTo/Overture.

GOTO REVOLUTIONISED SEARCH ADVERTISING

GoTo.com was incubated by Bill Gross’s Idealab. It revolutionized the staid world of internet search engines by offering paid search for the first time. Though initially considered distasteful, the model was later adapted by Google and others. AdWords, which accounts for the bulk of revenues at search engine behemoth Google, traces its origins to the innovations brought about by Bill Gross and his team. GoTo.com was the first to introduce the system of real-time auctioning of ads triggered by keyword searches. Gross and his team were also the first to introduce the concept of pay per click, or PPC, wherein advertisers paid for their ads only when viewers clicked on them. Until then, internet advertising too had followed the traditional print and television advertising metric of cost per thousand impressions (known as CPM). What made everyone sit up and take notice of GoTo.com was that its innovations raked in revenues in spades. In 2000, a mere two years after it was founded, GoTo boasted of $100 million in revenues. It went public soon after, while still in the red. The IPO brought in a billion dollars and plenty of market recognition. For its non-paid results, GoTo.com licensed the technology of Inktomi, a then popular search engine.

GOTO FAILED TO PATENT ITS INNOVATIONS

Through a management oversight, GoTo.com missed patenting both its real-time auction and pay per click innovations within the mandatory one-year window after commercial operations began. It ended up paying a heavy price for this. Google, an upstart search engine which wanted to index the world, was still groping for a revenue model when its founders were struck by the strides made by GoTo/Overture on the paid search front. According to The Google Story by David A. Vise, Brin and Page were nevertheless turned off by certain GoTo.com practices, like selling “guarantees that websites would be more frequently included in Web crawls, provided a business was willing to pay extra for it”. They also tweaked the bidding process for AdWords. Unlike GoTo.com, which ranked its ads based on how much the vendor paid, Google used a formula. Apart from how much someone was willing to pay, it also factored in how frequently users clicked on the ad. “More popular ads drifted up. Less popular ones drifted down. So they trusted their users to rank the ads,” wrote Vise. In another crucial difference, while Overture made advertisers pay what they had bid, under the Google system devised by Eric Veach the winner was charged only a penny more than the second-highest bid.
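A simplified sketch of those two tweaks working together, ranking by bid times clickthrough rate and charging each winner a penny more than the minimum needed to hold its spot, is below. The formula, figures, and ad names are illustrative assumptions, not Google’s actual production rules:

```python
def rank_and_price(ads):
    # Rank by bid x clickthrough rate; charge each ad a penny more than
    # the lowest bid that would still keep it ahead of the next ad.
    ranked = sorted(ads, key=lambda a: a["bid"] * a["ctr"], reverse=True)
    priced = []
    for i, ad in enumerate(ranked):
        if i + 1 < len(ranked):
            nxt = ranked[i + 1]
            price = nxt["bid"] * nxt["ctr"] / ad["ctr"] + 0.01
        else:
            price = 0.01          # lowest slot pays a nominal floor
        priced.append((ad["name"], round(price, 2)))
    return priced

ads = [
    {"name": "high bid, rarely clicked", "bid": 2.00, "ctr": 0.01},
    {"name": "lower bid, often clicked", "bid": 1.00, "ctr": 0.05},
]
print(rank_and_price(ads))
# [('lower bid, often clicked', 0.41), ('high bid, rarely clicked', 0.01)]
# The popular ad outranks the bigger bid: 'more popular ads drifted up'.
```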
Google innovated on this pricing because it found that Overture’s system encouraged what was known as bid shading, writes Steven Levy in his book In the Plex. Because Overture made advertisers pay the amount they had bid, even if the next lowest bidder had offered significantly less, they always had an incentive to lower their bids in subsequent rounds, Google reasoned. The Google leadership also noted that a cottage industry of software providers had sprung up to sell programs that automated bid shading on Overture. Incidentally, Veach’s auction scheme, which he created from the ground up on his own, was later found to mirror the famed Vickrey second-price auction model adopted by the US Federal Reserve to auction government securities.

A third tweak Google made to the GoTo.com model was the introduction of the Quality Score. If your ad had a higher Quality Score, it got a better placement even when someone else had bid higher. Levy observes that the Quality Score made advertisers work hard to stay relevant. “You paid less if your ads were relevant. So you had a reason to work on your keyword, your text, your landing page, and generally improve your campaign.” Nevertheless, there is no arguing that the innovations introduced by Bill Gross and his team at GoTo.com showed the Google founders a way to monetize their search engine.

Though Gross and his team had missed the window to patent their main innovations, they scrambled and patented everything else they could think of, basically a bunch of obscure things like the way they accepted bids. This came in handy when Overture later sued Google for patent infringement. Though the Google founders were determined to fight the suit, in the run-up to Google’s IPO, when many things started going wrong on the PR front, the company’s VC backers forced Brin and Page into settling the Overture suit. In 2004, ahead of its IPO, Google gave Overture’s owner Yahoo 2.7 million shares to drop the litigation.

SO WHAT’S NEXT FOR GOTO.COM?

As we can see, GoTo.com has plenty of history to boast about. Though Google dominates the search arena, there is still room for a nifty competitor to innovate, since search is far from a solved problem. Will the GoTo.com team be lucky in their new avatar? In an interview with domainholdings.com, Metzger said the team bought the GoTo.com domain in a private deal with the owner. He had this to offer on the team’s future plans: “We’re doing a fair amount of experimentation and testing in the area of search — let’s call it enhanced search — and it’s likely we’ll continue to do that for a while. You can check it out at the website, which has been resurrected as a simple search engine with the look and feel of the old GoTo.com.” A post on its Facebook page also states that the team is focused on fixing search. Here’s wishing them all the best in becoming great at what they are trying to do. e.o.m.

The late Steve Jobs has been credited with decisively influencing the development and course of several industries. Indeed, he arguably invented entire industries. But not much has been said about Jobs’ contribution to the creation of the World Wide Web.
Recently, based on a statement from World Wide Web inventor Tim Berners-Lee, some blogs tried to draw traffic with posts on how the concept of ‘inter-personal computing’ advocated by the NeXT computer, a creation of Steve Jobs, inspired Berners-Lee. Truly a very tenuous link. But those who have researched the origins of the Web know that Jobs’ influence goes beyond a mere slogan. More than Berners-Lee, his colleague and co-founder of the Web, Robert Cailliau, has been forthright about the role played by the NeXT computer. As he recalled later, “Mike Sendall buys a NeXT cube for evaluation, and gives it to Tim. His prototype implementation on NeXTStep is made in the space of a few months, thanks to the qualities of the NeXTStep software development system. This prototype offers WYSIWYG browsing/authoring!”

WYSIWYG stands for ‘what you see is what you get’, and describes HTML editors which display content during editing in the exact format in which it will appear to the end-user. It is astonishing that a WYSIWYG editor found a place on the NeXT, which was released in 1988. Those of us who have taken the Internet History course at Coursera are familiar with a 1999 interview of Robert Cailliau by Prof. Charles Severance of the University of Michigan, which is available on YouTube (https://www.youtube.com/watch?v=x2GylLq59rI). In it, Cailliau explains at length how the NeXT computer (created by Steve Jobs after he was kicked out of Apple), which was then considered ahead of its time, provided the right environment for the creation of the Web.
To place things in context, the World Wide Web was made possible by the internet protocols already set up by the US defence establishment for electronic communication. Berners-Lee and Cailliau invented the Web as a system of connected hypertext documents accessed via the internet.

CAILLIAU ENSURED THAT THE WEB PREVAILED OVER GOPHER

Today, in retellings of the story of the creation of the World Wide Web, one senses a tendency to diminish the role of Cailliau. He may have played a supporting role to Berners-Lee in the development of HTML, the first Web server, and the first Web browser, but in one crucial respect the Web owes its success to a far-reaching decision in which Cailliau had a hand. It should be remembered that the Web was not the only protocol for distributing and retrieving documents over the internet. For a while at least, it had a serious competitor in Gopher. In fact, Mosaic, the first widely used browser with a graphical user interface, provided access to both Gopher and the Web. Many Gopher supporters considered it faster, more efficient, and much better organised than the Web. Initially, its simplicity and ease of use made it more popular than the Web.

Why then did the Web end up as the dominant protocol? There are no easy answers. But some think that the University of Minnesota’s announcement in 1993 that it would charge licensing fees for the use of its Gopher server implementation spooked users and affected its adoption. Gopher, incidentally, was created by researchers at that university. In contrast, Cailliau worked with the legal service of CERN, where both he and Berners-Lee were employed, and played a role in persuading the institute to release the Web technology into the public domain the same year. This move proved decisive in the rapid worldwide adoption of the Web. So we salute Robert Cailliau for his great contributions to the advancement of human progress. And we also acknowledge that Jobs had indeed played an indirect role in the creation of the World Wide Web. e.o.m.

The pay per click (PPC) offering AdWords is Google’s most successful product to date and its largest revenue earner. Google’s revenues from advertising for the full year 2013 were $50 billion. AdWords was initially started by aping the successful model of the then competing search engine GoTo. But the engineers at Google, under the direction of Larry and Sergey, quickly took it to another level with plenty of value-adds. In his book I’m Feeling Lucky, former Google consumer marketing head Douglas Edwards has given insights into how the founders settled on the name AdWords for their new offering.
IT SHOULDN’T SOUND FUNNY WHEN REPEATED FIVE TIMES

Edwards says the new ad serving system began in late 2000 with the working title ‘AdsToo’, though no one wanted that to become the offering’s permanent name. Larry Page left the field open, saying all suggestions were welcome except those which sounded funny when repeated five times fast. Omid Kordestani, the sales head, did not want a combination word starting with ‘Google’. Within this broad framework, suggestions started pouring in. ‘PrestoAds’ and ‘Self-serve Ads’ were two names which earned the support of Salar Kamangar. Susan Wojcicki veered towards ‘AdsDirect’ as the name for the offering. But they were not done yet.

Edwards says he started off on a bad note by pushing for GIDYAP (Google Interactive Do-It-Yourself Ad Program), which was received with great derision. To retrieve lost ground, he had to come up with something better. So he tried ‘BuyWords’, a play on ‘bywords’ and ‘buy words’. It met with the approval of the sales team, and even Larry found it acceptable. But just when Edwards thought the matter was settled, he sensed that a new round of lobbying had begun. So he went home that day and worked on a new set of names for about an hour. The next morning, he sent out a fresh list of possibilities: Promote Control, Ad-O-Mat, Ad Commander, Impulse Ads, and AdWords. He liked the last one best, and spent considerable time selling it. “It’s new, and improved. It’s like ‘BuyWords’ without the ‘Buy’,” he pleaded. Redemption at last. Salar liked it. Omid liked it. Larry liked it. Sergey cast the final vote. He told the engineering team that the new ad serving system would be called AdWords. And so it came to be. e.o.m.

Serendipity, or the happy chance finding, has been the basis for many inventions and discoveries. After all, Christopher Columbus discovered the Americas while sailing in search of the wealth of India. Before him, Archimedes got the crucial insight into his principle while taking a bath, as the famous anecdote tells us.
Random searches on the Web often turn out to be adventures in happy chance findings. While search engines are investing extraordinary resources to make search results more accurate, to bring the searcher a result closer to exactly what he had in mind, the internet often reveals nuggets of information when searchers start following links at random. So never discount the power of serendipity: many inventions which upended established ways of thinking and acting had their origins in happy chance findings.

SERENDIPITY HELPED MIYAMOTO OVERTURN INDUSTRY DOGMA

Let’s take an example from the world of gaming. Shigeru Miyamoto is a legend in the world of video gaming. From Donkey Kong to Super Mario Bros. and The Legend of Zelda, Miyamoto redefined video gaming and made Nintendo a major force in the industry. But by the early years of the aughts, the creator of some of the most critically acclaimed and successful games and franchises of all time was regarded by some as past his prime. The gaming industry itself had by then entered a period when power, and the intensity of the graphics a console could push, were considered everything. As Joshua Cooper Ramo narrates in his book The Age of the Unthinkable, “Sony and Microsoft spent hundreds of millions of dollars to custom-build graphics processors capable of performing several trillion calculations per second. These chips were so expensive that Sony and Microsoft lost money on each gaming console.” What’s more, both companies, which were by then leading the market with their PlayStation and Xbox consoles respectively, started investing millions more in new chips and hardware, anticipating the arrival of high-definition television and ever more intense computing power.

But at his lab in Kyoto, Miyamoto remained unimpressed. “Too many powerful consoles can’t co-exist,” he had concluded. “It’s like having only ferocious dinosaurs. They might fight and hasten their own extinction.” When word leaked about Miyamoto’s shift in thinking, the gaming press started bombarding him with questions. All he would offer was a cryptic comment: “We are kind of in a strange period where power is the crux of whether something is going to be successful,” he said. “That seems a little bit odd. If we rely solely on the power of the console to dictate what we are going with games, I think that tends to suppress the creativity of the designers.” Enough said. Many listeners thought the gaming veteran had finally lost his grip on the industry.

Wii SCORED WITH TECHNOLOGY SOURCED FROM AIR-BAGS

As it happened, Miyamoto had the last laugh. Nintendo’s new console, the Wii, released in 2006, used graphics technology two generations behind the PS3 and Xbox 360. But it came out of nowhere to turn the industry upside down. What distinguished it was its motion control element, which transformed gaming, till then an experience enjoyed by couch potatoes, into a much more physical sport. As Ramo says in his book, “In homes in Japan, the United States, and Europe, owners cleared space in front of their TVs, pushed their couches out of the way instead of sitting on them, then jumped, crawled and flailed around with their Wiimotes. Wii killed the idea that a video game was something you played without breaking a sweat.” Powering all this action was an innovative chip inside the Nintendo Wii. It had rather interesting origins. Ramo says, “It hadn’t come from some geek-stuffed gaming chip design house.
It had come, instead, from inside an automobile air-bag system, very similar to what you have in your car. The chip was a small silicon tab called an accelerometer, a breakthrough device that could measure the most minute changes in direction and speed. In your car, the chip is programmed to notice the sort of changes that could be associated with an accident — sudden jerks, wild skids, the instant snap of collision. When it senses these radical changes, it fires off the air-bags in a carefully planned sequence. But the best of these chips, the most advanced, could measure smaller and more nuanced movements.”

Miyamoto was struck by the possibilities. What if these chips were combined with the hand-held controllers of video game consoles, he wondered. Nintendo, ever ready to invest resources in transforming Miyamoto’s every new idea into reality, worked hard at it. Still, it took four years to get the accelerometer to work in a gaming console. Nintendo’s software engineers developed new ways to translate human movement into virtual action. In the end, the effort was worth it. Sony and Microsoft were soon forced to play catch-up with their PlayStation Move and Kinect, respectively. Gaming was never the same after the Wii. So who says random searches and enquiries have no value? e.o.m.

Internal schisms at The New York Times over the firing of its executive editor Jill Abramson have led to the leak of a crucial document on Digital Innovation. The document, which was prepared by a high-profile team after months of interviews and research, is an invaluable addition to the arsenal of anyone interested in Digital Media. This is the concluding part of this series on the insights from the document.
HOW THE US DIGITAL MEDIA IS FINDING ITS AUDIENCE

In print, if the story gets printed, it gets an audience. Online, you have to go and find that audience. At The New York Times, the report says, the story is done for writers and editors when they hit ‘Publish’. In contrast, at the Huffington Post, the article begins its life when the author hits ‘Publish’. HuffPo expects all reporters and editors to be fully fluent in social media. A HuffPo story cannot be published unless it has a search headline, a photo, a tweet, and a Facebook post. The Guardian has a promotion team inside the newsroom. The Atlantic expects reporters to promote their own work and mine traffic numbers to look for best practices, the authors say. They note that many digital media organizations place a team in the newsroom to track the most popular stories in real time; the team then helps the desk draw more traffic. Other sites repackage unexpectedly poor performers and try to find them a new audience. For instance, Reuters has a two-member team to find up to seven hidden gems per day, which they then repackage and re-publish. According to Dan Colarusso, executive editor of Reuters Digital, “All web editors engage on social, and are also tasked with finding related communities and seeding their content.” At Circa, the document notes, each article is divided into atoms of news such as facts, quotes, and statistics. The Washington Post looks at data in real time to track which stories are drawing readers from Twitter, and then shows those same stories to other people who visit from Twitter. The NYT team also found that competitors treat platform innovation as a core function. Buzzfeed, for instance, spent years investing in analytics, optimization, and the testing of formats. These, the report says, are Buzzfeed’s secret weapons.
The report goes to some length to describe the extraordinary efforts ProPublica makes to publicise its investigative journalism. At ProPublica, an editor meets with search, social, and PR specialists to develop a promotional strategy for every story, and reporters are expected to submit five tweets along with each story they file. Specific strategies are identified for each story ahead of its publication:

1. An expert is identified to focus on ways to boost a story on Search through headlines, links, and other tactics.
2. A Social Editor decides which platforms are best for the story, and then finds influential people to help spread the word.
3. A Marketer reaches out through phone calls or emails to other media outlets, as well as organizations that are interested in the topic.
4. The Story Editor ensures the journalism is being promoted appropriately.
5. A Data Analyst evaluates the impact of the promotion.

(The series concludes here. Earlier installments in this series can be found at the following links: Part 3, Part 2, Part 1)