Manual penalty is Google’s ultimate weapon to punish a website it suspects of indulging in bad behavior. Webmasters get the hint that their site is facing a possible manual penalty when it completely or partially disappears from Google search results. For instance, if you key in your website’s URL in the Google search field and the result does not return a single page from your site, you can be sure that you are facing a manual penalty. Sometimes only a subset of pages from a very large website is made to disappear from Google’s search index. Either way, whether the penalty is site-wide or partial, its effect on the business can be devastating.

CONFIRM, AND THEN TAKE ACTION

If you suspect that your site has been penalized by Google, the first step is to confirm that you are indeed facing a penalty. Log in to Google Webmaster Tools, click Search Traffic in the dashboard, and then click Manual Actions. If your site has been penalized, the action will be listed here as either ‘Site-wide’ or ‘Partial’, depending on its scope. There will also be a mention of the Reason for the action, as well as a list of the parts affected under the Affects header. Click on Reason to find out why your site faced the penalty. It could be one of the following.

Unnatural links to your site — impacts links

This means Google suspects you could be buying links or collaborating in link schemes. If you think you have been penalized wrongly, you should use the Reconsideration tool.

Unnatural links

If you see this listed as the Reason, it means Google is seeing a pattern of manipulative links pointing to your site, and has taken manual spam action against it. In this case, you should download the links to your site using Webmaster Tools, and check the list for links that violate Google’s guidelines on linking.
Once you’ve identified such links, contact the webmasters of those sites and politely request that they either remove the links or add a “rel=nofollow” attribute. A good SEO practitioner can help you here. It’s possible that you are unable to get some problem links removed. This is where you use the Disavow Links tool provided by Google. Once you have removed or disavowed the links, it’s time to use the Reconsideration tool and ask Google to index your pages once again.

Hacked site

If this is what you see under Reason, it means Google believes one or more of your pages have been hacked by third parties. Google suggests a series of steps to clean up your site, and a good SEO practitioner can help you in this case too. Once you are sure that your site has been cleaned up, use the Reconsideration tool to ask Google to index your pages once again.

Unnatural links from your site

This message means Google suspects you are providing unnatural and manipulative links to others from your site as a beneficiary of some link-buying scheme. You should either remove these links or ensure that they don’t pass your link juice, by using the “rel=nofollow” attribute. Once done, use the Reconsideration tool to ask Google to index your pages once again.

Thin content with little or no added value

If you see this message, it means Google has applied a manual spam action on your site because it thinks you have low-quality pages of these types:
· Pages with automatically generated content
· Thin affiliate pages
· Content scraped from other sources
· Doorway pages
Identify these pages and correct the violations. Once done, use the Reconsideration tool to ask Google to index your pages once again.

Pure spam

If you see this message, it means Google believes you are practising aggressive spamming. You have to clean up the affected pages and approach Google for reconsideration.
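For reference, the disavow file mentioned earlier, submitted via Google’s Disavow Links tool, is a plain text list with one link or domain per line. The sketch below uses invented domain names; substitute the actual problem links from your downloaded list:

```text
# Asked the webmaster of spamdirectory.example to remove links; no response
domain:spamdirectory.example

# A single paid link we could not get taken down
http://shadyseo.example/links/page1.html
```

Lines starting with # are comments, “domain:” lines disavow every link from that domain, and bare URLs disavow individual pages.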
User-generated spam

If you see this message, it means Google has identified that your site hosts pages with spam generated by site users. Such spam can exist in site forums, profile pages, or guest pages. Once again, identify such pages and do a clean-up, then ask Google to reconsider. Measures suggested by Google to prevent user-generated spam include the following:
· Use a CAPTCHA system
· Turn on comment moderation
· Use the “nofollow” attribute
· Add a ‘report spam’ feature
· Disallow hyperlinks in the comment box
· Block Google’s access to comment pages using the robots.txt file
· Monitor your site for spammy pages by setting up Google Alerts for keywords linked to spam

Cloaking and/or sneaky redirects

If Google thinks your site shows one page to Google and another to users, or redirects users to a site different from what Google saw, you will get this message. To clean up, use the Fetch as Google tool, which enables you to compare pages as seen by Google and as seen by users. If the two are different, track down and remove the source of the difference. Once done, request reconsideration using the tool meant for that.

Hidden text and/or keyword stuffing

If Google thinks some of your pages contain hidden text or keyword stuffing, you will see this message. You could take the help of a good SEO practitioner, or review the Webmaster Guidelines on hidden text and keyword stuffing on your own. Use the Fetch as Google tool to find content visible to the Google crawler but hidden from human visitors. Remove or re-style such text once identified. Once done with the clean-up, use the Reconsideration tool to ask Google to index your pages once again.

Spammy free hosts

This applies if you run a free web hosting service and a significant number of your users are indulging in spam. The recommended steps are to block the creation of automated content and to monitor the service for abuse using simple but effective tools like the site: operator query or Google Alerts.
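Two of the user-generated spam defences listed above, the “nofollow” attribute and blocking comment pages via robots.txt, can be illustrated concretely. The path and domain below are invented, so adapt them to your own site:

```text
# robots.txt — keep crawlers out of user-comment pages
User-agent: *
Disallow: /comments/

<!-- a user-submitted link rendered with the "nofollow" attribute,
     so it passes no link juice to the linked site -->
<a href="http://example.com/" rel="nofollow">visitor's link</a>
```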
There’s also a Safe Browsing Alerts for Network Administrators tool provided by Google. For other best practices, it’s best to contact a knowledgeable SEO practitioner.

Spammy structured markup

This message means Google thinks some of your HTML or CSS code may be using techniques outside its rich snippet guidelines. Webmasters use rich snippets to take advantage of Google’s offer to provide a rich summary of a page in the search engine results page. To fix this error, first refer to the guidelines. After clean-up, test using the Structured Data Testing Tool provided in Webmaster Tools. Once done, submit for reconsideration.

THE IMPORTANCE OF GUIDELINES

So there we have it: a summary of the most common reasons given by Google for punishing your website with the dreaded manual penalty. Some of the remedial measures listed above may require handholding from an experienced SEO practitioner. As you can see, search engine optimization is not all hot air; there are situations, like being at the receiving end of the dreaded manual penalty, when you would certainly need the services of an SEO firm. It is also fundamental that as a website owner you are thorough with Google’s Webmaster Guidelines. In the instances given above, the remedy suggested by Google refers again and again to the Webmaster Guidelines. So make sure you have that front well covered, because it could be the best insurance against ever facing a manual penalty from Google.
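As a footnote to the spammy structured markup discussion above, here is a minimal schema.org microdata sketch for a review rich snippet (names and values invented). Markup of this kind is considered spammy when it describes content users cannot actually see on the page:

```html
<!-- Minimal review rich snippet; the marked-up text must be visible on the page -->
<div itemscope itemtype="http://schema.org/Review">
  <span itemprop="itemReviewed">Acme Widget</span> reviewed by
  <span itemprop="author">Jane Doe</span>:
  <span itemprop="reviewRating" itemscope itemtype="http://schema.org/Rating">
    <span itemprop="ratingValue">4</span> out of
    <span itemprop="bestRating">5</span>
  </span>
</div>
```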
More nuggets from I’m Feeling Lucky to help us understand Sergey Brin better. On the relationship between Larry Page and Sergey Brin, Edwards quotes David Krane, then a senior member of the communications team: “Those guys had a communication channel that was very direct, very open. When there was tension, it was when they were fighting over data. They would be downright rude to each other, confidently dismissing ideas as stupid or naïve or calling each other bastards. But no one would pout.”

THEY SHOULD BE PAYING US

It didn’t take long for Douglas to realize that Larry and Sergey ideally wanted Google to churn out perfect products while incurring no expense. “That was impossible, of course, but it didn’t stop us from trying,” he writes. Douglas learned that for the founders, the starting point for any negotiation would be the position, “they should pay us”. To the founders, winning a deep discount wasn’t a victory, wrote Douglas. “It was an admission of a failure to get something for free.”

Another core value at Google was to ‘never pay retail’. In the early months of 2000, this was thoroughly tested when an expanding Google wanted more space and Silicon Valley had nothing to offer at less than $8 per square foot. Despite his protestations, Google’s official office hunter George Salah was forced by Larry and Sergey to put in a lowball offer of $6.45 per square foot. The landlord screamed at George’s broker and raised the price to $8.25 per square foot, notes Edwards. As luck would have it for the Google founders, two weeks later the dotcom fever broke and the real estate market collapsed. Salah “sublet space in that same building for $3.5 per square foot, and a year later he leased a completely furnished building nearby for 45 cents a square foot”, writes Edwards. So the Google core value, after all, was based on sound logic.
GOOGLE’S GREATEST CORPORATE EXPENSE

At an all-hands meet one Friday, Sergey popped the question, “Do you know what’s our greatest corporate expense?” Douglas remembers that everyone wanted to answer that one. Answers came in thick and fast: ‘health insurance’, ‘salaries’, ‘servers’, ‘electricity’, and even ‘Charlie’s grocery bills’. Sergey wasn’t impressed. “No,” he said solemnly, shaking his head, “opportunity cost.” He explained that the products the company wasn’t launching and the deals it wasn’t doing threatened its economic stability more than any single line item in the budget.

(To be continued. Vignettes taken from the book I’m Feeling Lucky by Douglas Edwards. The first, second and third parts of this series can be accessed by clicking on the links.)

In the first part of this post, we saw how Google engineer Paul Buchheit’s 20% side project, which led to the creation of Gmail, had its origins in his itch to fix the buggy Web emails available in the market. He wanted to add a then-unheard-of 1 gigabyte of storage space so that users would never have to spend hours sorting and deleting their mails. Ryan Tate’s book The 20% Doctrine says this was roughly five hundred times the storage space offered by competitors Hotmail.com and Yahoo! Mail.

The question then was how to finance the expense of this free extra storage space. Tate says Buchheit’s manager Marissa Mayer wanted him to charge users for the extra storage. But instead, Buchheit started looking at contextual advertisements of the kind used in AdWords. AdWords shows Google searchers advertisements based on their search terms, on the right-hand side as well as on top of the search results page. For instance, if someone searched for ‘hotel’, advertisements for hotels would turn up. Buchheit wondered if the same logic could be extended to email. What if ads were shown alongside emails based on the contents of the mail, he thought.
On the face of it, the idea was brilliant, but it sounded equally creepy, says Tate. Mayer expressed her misgivings bluntly. “People are going to think there are people here reading their emails and picking out the ads, and it’s going to be terrible,” she recalled thinking in a Stanford University podcast recorded later. The podcast also recounted how Buchheit broke his promise to Mayer not to work on combining advertisements with email. “I remember leaving, and when I walked out the door I stopped for a minute, and I remember I leaned back and I said, ‘So Paul, we agreed we are not exploring the whole ad thing now, right?’ And he was like, ‘Yup, right’.” Tate says Buchheit broke his word almost immediately. “Over the next few hours, he hacked together a prototype of the ‘ad thing’, a system that would read your email and automatically find a related ad to display next to it.”

HE USED A PORN FILTER TO CREATE ADSENSE

Tate also gives the details of how Buchheit went about creating the AdSense building blocks. Just as he adapted the Usenet search experience to create Gmail, he started working with another tool, a porn filter no less, to create AdSense. This was basically code he had created to screen for adult content, probably used to switch the SafeSearch filter on and off in Google Search settings. “Normally, the filter examined a batch of known porn pages and listed words that occurred disproportionately within those pages. Other pages containing those words were then assumed to be porn. Buchheit instead turned the filter on Gmail messages, using the resulting keywords to select advertisements from Google’s AdWords database.” Tate advises youngsters pursuing side projects to copy Buchheit’s method of adapting old work. “As tempting as it is to start from a clean slate, always look for opportunities to use something old to create something fresh,” Tate advises.
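The filter’s trick that Tate describes, listing words that occur disproportionately often in a batch of documents and then using them to pick a related ad, can be sketched in a few lines of toy Python. Everything here (the data, the function names, the crude frequency ratio) is invented for illustration; it is not Buchheit’s actual code:

```python
from collections import Counter

def disproportionate_words(target_docs, background_docs, top_n=3):
    """Return words that occur disproportionately often in target_docs
    relative to background_docs (a crude ratio of relative frequencies)."""
    target = Counter(w for d in target_docs for w in d.lower().split())
    background = Counter(w for d in background_docs for w in d.lower().split())
    t_total = sum(target.values())
    b_total = sum(background.values())

    def ratio(word):
        t_freq = target[word] / t_total
        b_freq = (background[word] + 1) / b_total  # +1 smoothing for unseen words
        return t_freq / b_freq

    ranked = sorted(((w, ratio(w)) for w in target), key=lambda x: -x[1])
    return [w for w, _ in ranked[:top_n]]

def pick_ad(email_text, background_docs, ads):
    """Select the candidate ad sharing the most keywords with the email."""
    keywords = set(disproportionate_words([email_text], background_docs))
    return max(ads, key=lambda ad: len(keywords & set(ad.lower().split())))

# Toy data: an email about hiking, matched against two candidate ads.
background = ["meeting agenda for monday", "please review the attached report"]
email = "fancy a hiking trip this weekend? bring hiking boots"
ads = ["Discount hiking boots on sale", "Cheap meeting room hire"]
print(pick_ad(email, background, ads))  # → Discount hiking boots on sale
```

The same inversion Tate describes is visible here: a classifier built to flag one kind of content is pointed at another corpus, and its over-represented words become ad-selection keywords.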
HOW BUCHHEIT WON OVER THE DECISION MAKERS

Although Buchheit directly rebelled against his boss in putting together the delivery mechanism of what turned out to be AdSense, Tate says it helped that Google had a culture where results prevailed over preconceptions. When Marissa Mayer opened her Gmail account the next day, only to see ads running alongside mails, her immediate instinct was to summon Buchheit for an explanation. But she delayed action, thinking he deserved the mercy of a few more hours of sleep after having worked the whole night. Tate writes, “While she waited, Mayer checked her Gmail. There was an email from a friend who invited her to go hiking — and next to it, an ad for hiking boots. Another email was about Al Gore coming to speak at Stanford University — and next to it was an ad for books about Al Gore. Just a couple of hours after the system had been invented, Mayer grudgingly admitted to herself, AdSense was already useful, entertaining, and relevant.”

Tate writes that like Mayer, Larry Page and Sergey Brin loved AdSense. “In short order, the Google high command decided AdSense would be a top priority. It was a no-brainer: Google’s main revenue source, AdWords, placed contextual ads alongside search results. But search results were just 5 percent of Web views; AdSense promised to open up the other 95 percent to ads, since it could go inside any Web page,” Tate writes.

According to Tate, it took just six months for AdSense to launch. In June 2003, it was made available to the public as a widget that any publisher could attach to any Web page. It generates more than $10 billion per year for Google. Gmail itself, for which AdSense was first developed by Buchheit, launched to the public on April 1, 2004, in what was initially thought to be an April Fool’s Day practical joke. Today it’s probably the world’s largest free Webmail service, as well as the pivot around which the Google Apps for Business suite functions.
So what lessons can people who run 20% projects take from Buchheit’s innovations in the development of Gmail and AdSense?
e.o.m.

Google’s ‘20 percent policy’ has been much celebrated. Although people are skeptical about the company’s commitment to the policy now, it still remains in force, though it was never a written-down document. This unwritten policy allows employees to spend up to one day a week (four days a month, or roughly 50 days a year) on side projects they want to pursue, using the company’s own resources. Many such projects later went on to become part of Google’s core offerings, including Gmail, AdSense, and Google News.

Google engineer Paul Buchheit was the person responsible for the creation of both AdSense and Gmail; both began as his side projects. Today, AdSense is Google’s second biggest revenue earner after AdWords. Gmail is probably the biggest Web-based email service in the world. It was revolutionary when it was introduced, and it still leads from the front as the pivot around which the Google Apps for Business suite on the cloud is offered by the company. So successful and threatening did Google Apps for Business become for Microsoft’s bread-and-butter Office suite that Microsoft was forced to offer a cloud version of the same in Office 365. This meant the Outlook email client had to be made available as a Web offering, so Microsoft ended up renaming Hotmail as Outlook. Look how a 20% project started by an engineer at Google ended up affecting even the company’s competitors!

Much of what I am going to write here is taken from Ryan Tate’s book, The 20% Doctrine, whose first chapter chronicles Buchheit’s operation Gmail and AdSense.

BEGIN BY SOLVING A PERSONAL ITCH

Tate says 20% side projects usually begin as an attempt to scratch some personal itch. In the early years of the aughts, Buchheit’s itch was clearly email. Most of the popular Web-based offerings majorly sucked for him. For one, their storage capacity was minimal, and users had to constantly work at trimming and deleting mails. Also, search capabilities were sadly lacking in email.
Most providers didn’t have the knowhow to search mails for keywords appearing in the body of the text. Buchheit was conveniently placed. He had just finished fixing Google Groups, an archive of the online conversations earlier known as Usenet. Buchheit’s fix involved making the archive searchable. He realized that email messages were structurally very similar to messages on a board like Google Groups: the ‘To:’, ‘From:’, ‘Date:’, and ‘Subject:’ fields were shared, and the formatting rules were common as well. So Buchheit had an itch to scratch, and he knew what to do. It took him just a few hours to release the first version of Gmail. He shared it with a few colleagues, with code supporting only his own account. It would be good if the code supported our accounts too, they replied. And so Gmail 2.0 was soon released, which supported search of users’ own email accounts. He followed the ‘release early and release often’ principle, which is today a defining theme of the agile software development school.

An early innovation was ‘Conversation View’, which displayed all replies to an email message as a unified thread. This prevented colleagues from talking past one another, as was the practice before. “They would have to read all prior replies to an email before they could send one of their own,” says Tate. Very early, Gmail distinguished itself by its search capabilities. As mentioned before, this was an area where other email providers sucked, but Gmail quickly nailed comprehensive email search.

Another innovation by Buchheit was the extensive use of JavaScript. It made Gmail feel like a desktop email client such as Outlook, in contrast to the other Web emails of the time, like Hotmail.com. “For example, writing a message on Hotmail.com could easily require four page loads: one for ‘new message’, one to open your address book, one to search it, and one to pick a recipient. On Gmail, you clicked just once, and JavaScript generated the blank message form right away.
If you started to type a friend’s name, Gmail would offer to autocomplete his email address. This felt like magic,” Tate writes in his book.

HOW TO SUCCEED WITH A 20% PROJECT, THE BUCHHEIT WAY

Tate writes that to succeed with a 20% project, “the trick is to find a way to make a small initial prototype and then take small steps forward”. Tech start-ups refer to this as the Minimum Viable Product, notes Tate. “The sooner you release, the sooner you get information from your users about where the product should go,” he writes. The Gmail churn was so intense, for instance, that the front-end was rewritten about six times and the back-end about three times.

The next concern for a 20% project developer is to know when to stop. As in, when do you consider your project sufficiently developed that you are ready to ship? Buchheit took to heart the advice from then Google CEO Eric Schmidt that he should launch only after getting 100 happy users for Gmail inside Google. Buchheit later said that he and his team would approach people directly for their feedback, and if a user set the bar too high, they would abandon that user, reckoning they were unlikely to ever satisfy him. But in short order, they won over 100 happy users by making small tweaks to the code based on user feedback.

Humility is an important quality for such a project developer to have. Tate quotes Chris Wetherell, the lead developer of another 20% project called Google Reader (now shut down), as saying about the Gmail project, “Can you imagine working on it for two years? No daylight. Very little feedback. Many iterations, many. Some so bad that people thought, ‘This will never launch. This is the worst thing ever.’ I remember being in a meeting, and a founding member of Google said, ‘This is brand destroying. This will destroy our brand. This will crush our company’.” But Buchheit never gave up, even after such withering criticism from within the company.
In the next part, we will take up how he created AdSense as a way of monetizing Gmail, since it came with a till-then unheard-of one gig of free storage for users. There are many lessons for innovators and entrepreneurs in understanding the strategies Paul Buchheit used to get the buy-in from the company’s top leadership to invest its best resources in both Gmail and AdSense.

e.o.m.

Google’s translation service is in the news in India now for the wrong reasons. Apparently, the Union Public Service Commission (UPSC), which conducts the civil services examinations, uses the free Google Translate service to translate most of the questions in the Civil Service Aptitude Test (CSAT) for the preliminary exam. Many exam takers blame the poor English-to-Hindi translation for making CSAT insurmountable for them.
Obviously, UPSC needs to fix the translation part. It could consider using professional translators instead of an algorithm-based service like Google’s. Having said that, one has to note that on the whole, Google has considerably improved on the translation front from where it began. Randall Stross, in his book Planet Google, has provided a fascinating account of how Google nailed the machine translation problem, which had long been a bugbear in computing.

Stross begins by saying that machine translation has a long tradition of overpromising and underdelivering. Given Cold War priorities, Russian-to-English translation of documents was the initial area of focus for researchers. But word-for-word matching had its limitations, including the famous ‘water goat’ problem, a reference to how computers frequently rendered the phrase ‘hydraulic ram’. Researchers thought all they had to do was add syntactical rules to word-for-word matching and refine the process until translation was fixed. It certainly improved the quality of translations, and soon commercial providers of translation services, including Systran, began entering the field.

But Stross notes that this rules-based methodology was only one approach to machine translation. An alternative approach, advanced by researchers at IBM in the 1970s, was known as Statistical Machine Translation. It was not based on linguistic rules manually drawn up by humans, but on a translation model that the software develops on its own as it is fed millions of paired documents — an original and a translation done by a human translator.

GOOGLE MADE USE OF IBM RESEARCH

Historically, IBM is known as a company with such a vast bureaucracy that many divisions do not know of the findings and research advances of other divisions in the same organization. It often falls to others to make the most of the research advances made at IBM.
For instance, Oracle was formed after Larry Ellison was alerted to the potential of an obscure research paper published at IBM about relational databases. Google made its tentative foray into translation in 2003 by hiring a small group of researchers and leaving them free to have a go at the problem. As was to be expected, they soon saw the potential of Statistical Machine Translation. In this model, says Stross, “the software looks for patterns, comparing the words and phrases, beginning with the first sentence in the first page of Language A, and its corresponding sentence in Language B. Nothing much can be deduced by comparing a single pair of documents. But compare millions of paired documents, and highly predictable patterns can be discerned…”

So the task before the Google translators was one of scale. To fix the translation problem, they needed millions of paired documents. Stross says the Google engineers solved this by obtaining a corpus of 200 billion words from the United Nations, where every speech made in the General Assembly, as well as every document produced, is translated into five other languages. “The results were revelatory,” says Stross. “Without being able to read Chinese characters or Arabic script, without knowing anything at all about Chinese or Arabic morphology, semantics, or syntax, Google’s English-language programmers came up with a self-teaching algorithm that could produce accurate, and sometimes astoundingly fluid, translations.”

Google soon went to town with its achievement. At a briefing in May 2005, it held up two translations of a headline in an Arabic newspaper side by side — its own and Systran’s. The Systran translation read ‘Alpine white new presence tape registered for coffee confirms Laden’. It was sheer nonsense. The Google translation rendered it as ‘The White House confirmed the existence of a new Bin Laden Tape’. Pretty impressive! And Google didn’t stop there.
It entered its translation service in the annual competition for machine-translation software run by the National Institute of Standards and Technology in the United States. Google came first in both Arabic-to-English and Chinese-to-English, leaving Systran far behind. Google repeated the feat in 2006, coming first in Arabic and second in Chinese. Stross says a stupefied Dimitris Sabatakakis, the CEO of Systran, could not grasp how Google’s statistical approach could outsmart his company, which had been in the machine translation business since 1968, and which had initially even powered Google’s translation efforts. At Systran, “if we don’t have some Chinese guys, our system may contain some enormous mistakes”, he was quoted as saying. He could not understand how Google, without those Chinese speakers double-checking the translation, had beaten Systran so soundly. Incidentally, Google hasn’t taken part in the competition since 2008, perhaaps because it found there was nothing left to prove.

FROM BILINGUAL TO MONOLINGUAL

Stross’ description of how Google built up a monolingual language model is also a fascinating read. While the bilingual translation model converts text from one language to another, the monolingual language model directs its efforts at using software to fluently rephrase whatever the translation model produced. In other words, this model perfects the language after it has already been translated from another. How did Google manage this? Stross has an answer: “The algorithm taught itself to recognize what was the natural phrasing in English by looking for patterns in large quantities of professionally written and edited documents. Google happened to have ready access to one such collection on its servers — the stories indexed by Google News.” Stross says that “even though Google News users were directed to the Web sites of news organizations, Google stored copies of the stories to feed its news algorithm.
Serendipitously, this repository of professionally polished text — 50 billion words that Google had collected by April 2007 — was a handy training corpus perfectly suited to teach the machine translation algorithm how to render English smoothly.”

So Google Translate may not be perfect, but it is constantly getting better, using software that teaches itself to recognize patterns by looking at a large volume of data. “Google did not claim to have the most sophisticated translation algorithms, but it did have something that other machine-translation teams lacked — the largest body of training data.” As Franz Och, the engineer who led (and still leads) Google Translate, said, “There’s a famous saying in the natural language processing field, ‘More data is better data’.” Indeed. Data has helped Google prevail as the leader in yet another segment of search.

e.o.m.

In our previous blog post, we looked at how GoTo.com introduced the concepts of real-time auctioning of online ads and pay-per-click advertising. Google later adapted and refined them to create the AdWords behemoth, which today is the single largest source of revenue for the company: advertising accounted for $50 billion of Google’s $55 billion revenues in 2013. We had also seen that the auctioning system built from the ground up by Eric Veach for Google charged the AdWords auction winner for a given keyword only a penny more than the second highest bid. This was surprisingly similar to the Vickrey auction model used to auction US government securities. William Vickrey was an economist and Nobel laureate. Veach had created his real-time auctioning system without being aware of the Vickrey model! Simply amazing.

HOW THE VICKREY MODEL WORKS

In the traditional Vickrey auction, “all bids are placed in a sealed envelope. When the envelopes are opened to determine the winner, the highest bid wins. But there’s a twist.
The winner doesn’t pay his or her bid. Instead, the winner only has to pay the second highest bid,” say Avinash Dixit and Barry Nalebuff in their celebrated game theory work, The Art of Strategy. Notice that this is slightly at variance with the real-time AdWords auctions, which take place online, and in which the winner has to pay only a penny more than the next highest bid.

Dixit and Nalebuff explain with clear illustrations that in a Vickrey auction, bidders have the ‘Dominant Strategy’ of bidding their true valuation. They define Dominant Strategy as the best play for an auction participant, no matter what others are doing. English and Japanese auctions are two other formats which exist at slight variance with the Vickrey auction model.

In the English auction used by auctioneers like Sotheby’s or Christie’s, the auctioneer stands in a room calling out bids which increase at every call. The authors say that in an English auction, “a bidder learns something about what others think the item is worth by seeing some of their bids”. Their advice to bidders in an English auction is to bid until the price exceeds their ‘Value’, and then drop out. Dixit and Nalebuff define Value as the bidder’s “walkaway number…the highest price at which the bidder will still want to win the item”.

In the Japanese auction, “all the bidders start with their hands raised or buttons pressed. The bidding goes up via a clock. The clock might start at 30, and then proceed to 31, 32…and upwards. So long as your hand is raised, you are in the bidding. You drop out by lowering your hand. The catch is that once you lower your hand, you can’t put it back up again. The auction ends when only one bidder remains. An advantage of the Japanese auction is that it’s always clear how many bidders are active. In an English auction, someone can remain silent even though they were willing to bid all along.
The person can then make a surprise entry late in the contest.” So in the Japanese auction, everyone sees where everyone drops out. In contrast, the authors say, bidders in a Vickrey auction don’t get a chance to learn anything about the other bids until the auction is over. Again, in both the Japanese and English auctions, what the winning bidder has to pay is effectively the second highest bid amount.

THE DUTCH AUCTION

Google is also famous for using the Dutch auction to pick buyers for its shares when it first went for a stock market listing in 2004. Dixit and Nalebuff say that in the Dutch auction, which is used to sell flowers in the Netherlands at places like Aalsmeer, the process is the reverse of the Japanese auction. Here the auction starts with a high price that declines. “Imagine a clock that starts at 100 and then winds down to 99, 98…The first person to stop the clock wins the auction and pays the price at which the clock was stopped.”

e.o.m.

The pay-per-click (PPC) offering AdWords is Google’s most successful product to date and its largest revenue earner. Google’s revenues from advertising for the full year 2013 were $50 billion. AdWords initially started by aping the successful model of the then-competing search engine GoTo. But the engineers at Google, under the direction of Larry and Sergey, quickly took it to another level with plenty of value-adds. In his book I’m Feeling Lucky, former Google consumer marketing head Douglas Edwards gives insights into how the founders settled on the name AdWords for their new offering.
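Going back to the auction formats discussed above, the key mechanic of the Vickrey auction and its AdWords variant, that the winner’s price is set by the runner-up’s bid rather than his own, can be sketched in a few lines of Python (bidder names and bid values invented):

```python
def vickrey_outcome(bids):
    """Sealed-bid second-price auction: the highest bidder wins,
    but pays only the second-highest bid."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner = ranked[0][0]
    price = ranked[1][1]  # second-highest bid sets the price
    return winner, price

def adwords_outcome(bids):
    """The AdWords variant described above: the winner pays a penny
    more than the second-highest bid."""
    winner, second = vickrey_outcome(bids)
    return winner, round(second + 0.01, 2)

bids = {"alpha": 2.50, "beta": 1.80, "gamma": 0.75}
print(vickrey_outcome(bids))   # alpha wins, pays 1.80
print(adwords_outcome(bids))   # alpha wins, pays 1.81
```

Note how alpha’s own bid of 2.50 never appears in the price; this is why bidding one’s true valuation is the dominant strategy Dixit and Nalebuff describe, since overbidding or underbidding cannot lower the price the winner pays.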
IT SHOULDN’T SOUND FUNNY WHEN REPEATED FIVE TIMES

Edwards says the new ad-serving system began in late 2000 with the working title ‘AdsToo’, although no one wanted it to become the permanent name of the offering. Larry Page left the field open, saying all suggestions were welcome except those which sounded funny when repeated five times fast. Omid Kordestani, the sales head, did not want a combination word which started with Google. Within this broad framework, suggestions started pouring in. ‘PrestoAds’ and ‘Self-serve Ads’ were two names which earned the support of Salar Kamangar. Susan Wojcicki veered around to ‘AdsDirect’ as the name for the offering. But they were not done yet.

Edwards says he started off on a bad note by pushing for GIDYAP (Google Interactive Do-It-Yourself Ad Program), which was received with great derision. To retrieve lost ground, he had to come up with something better. So he tried ‘BuyWords’, a play between ‘bywords’ and ‘buy words’. It met with the approval of the sales team. And even Larry found it acceptable. But just when Edwards thought the matter was settled, he sensed that a new round of lobbying had begun. So he went home that day and worked on a new set of names for about an hour. The next morning, he sent out the fresh list of possibilities:

· Promote Control
· Ad-O-Mat
· Ad Commander
· Impulse Ads
· AdWords

He liked the last one best, and spent considerable time selling it. “It’s new, and improved. It’s like ‘BuyWords’ without the ‘Buy’,” he pleaded. Redemption at last. Salar liked it. Omid liked it. Larry liked it. Sergey cast the final vote. He told the engineering team that the new ad-serving system would be called AdWords. And so it came to be.

e.o.m.
Larry Page looked a bit tired and his voice was hoarse when he sat down with Charlie Rose at the 30th annual TED conference in Vancouver, Canada, last Wednesday (March 19, 2014). The interview, aired on March 21, covered good ground and gave more than a glimpse into his thinking and the future trajectory of Google.
SEARCH IS STILL NOT A DONE THING

According to Larry, computing is still very clunky. Computers still don’t know where you are, what you know, and what you’re doing. Search is very much a deep thing, and we are still in its early stages. According to him, Search is still not a done thing for Google. He considers voice something very important in the journey to make computers understand us. Hence Google’s recent acquisition of DeepMind, a company which is into machine learning.

A video was played on stage about a Kenyan villager finding solutions to everyday problems, like a remedy for the diseased potatoes on his farm, and then helping his friends as well, using the search power of Google. The villager, Zack Matere from Soy, Kenya, is also a philosopher of sorts and had this gem to offer: “The follow-up to information is knowledge. Once people have knowledge, they can find solutions without being helped out. Information is powerful, but it’s how we use it that will define us.”

HOW BALLOONS SOLVED THE ACCESS PROBLEM

Two-thirds of the world still doesn’t have access to reliable internet. And that’s a worry for Google, which cannot fulfill its stated mission to organize the world’s knowledge without everyone being able to perform at least a basic search on their computers. When Google worked on a cost-efficient solution for this, one problem they encountered was providing cheap access points from up in the sky. The realization dawned that while it takes a lot of time to launch satellites, balloons can be launched very quickly. So it’s fairly easy to use balloons to create a worldwide mesh providing internet coverage, Google’s decision-makers concluded, says Larry. Maintaining the altitude of the balloons was a problem.
But then, Google being what it is, engineers there did some weather simulations which had never been done before and found that “if you control the altitude of the balloons, which you can do by pumping air into them, you can roughly control where they are”. So now Google is giving this project scale, launching balloons all over the world with the necessary equipment onboard, so that they serve the function of satellites and create a worldwide grid to solve the internet access problem. The whole thought process behind the project stemmed from Larry’s finding that someone had sent up a balloon 30-40 years ago which went around the earth multiple times. That set him thinking about why balloons could not be used to create a global mesh, and Google is now on that journey.

THE ECONOMIC CONCEPT OF ADDITIONALITY

When Charlie Rose asked Larry to give him a sense of the philosophy of his mind, the Google co-founder was more than willing to share his thoughts. Apparently, he is an adherent of the economic concept of additionality. What it means is that it’s by doing things that you learn. The more you do, the bigger your impact. By doing things that people think may not be possible, you can accomplish much, says Larry. The more you learn about technology, the more you learn what’s possible. So additionality means you make things happen by doing things.

CORPORATIONS CAN BE AGENTS OF CHANGE

A related theme to which Larry is committed is that corporations can be agents of change. The theme came into focus because Charlie Rose highlighted his donation to Elon Musk’s effort to make the Mars journey possible. It was the unusualness of a charitable grant to a corporation which Rose highlighted. According to Larry, most people think that companies are basically evil. But in technology we need revolutionary change, not incremental change. Commercialization in a way that makes revolutionary change possible is very important, he said.
Perhaps he is hinting that commercialization of revolutionary change can only be done by corporations. In Larry’s view, invention is not enough. Tesla invented electric power, but he struggled to get it out to people. He was beaten by other people. Invention plus commercialization that makes the change possible is Larry’s mantra. So there is nobility in what corporations are doing. He asks people to think about why the company they are working for is worthy not just of their time, but of their money as well. According to Larry, we don’t have a concept of making philanthropic contributions to corporations. So his contribution to Musk’s Mars venture should be seen as an attempt to inspire other people.

GOOGLE’S WORK ON TRANSPORTATION

Google’s research on driverless cars also came up during the interview. It seems this too was fueled by Larry’s long-term obsession with transportation systems, which began when he was at college in Michigan. Eighteen years ago, he first heard about people working on driverless cars. Now Google’s driverless cars are almost ready for commercial launch, having clocked 100,000 miles of fully automated travel. Larry says 20 million people are injured in vehicle mishaps every year around the world. It is also the leading cause of death for people under 35. Let’s hope driverless cars powered by Google’s software can make a drastic improvement to this appalling record.

The interview was also not without its intimate moments, as Larry referred to his illness repeatedly, first as a joke when he said he is in the elite company of people like Bill Gates and Edward Snowden who are incapable of delivering a TED Talk, and hence had to be interviewed. The condition came up again when he talked about Google’s ongoing work on medical research, inspired no doubt by his situation.
As is well known, Larry Page’s right and left vocal cords are paralyzed, which makes speaking difficult for him and has affected his voice quality. The Business Insider website recently revealed the condition to be called Hashimoto’s Thyroiditis.

e.o.m.

We’ve been paying so much attention to Google Search that our treatment of other search engines may not look evenhanded. Competition is good for everyone. At the moment Bing, Google’s worthy competitor from the Microsoft stable, is lagging behind in market share, but please don’t tell anyone at Microsoft that Bing cannot compete with Google head to head. Recently, when Satya Nadella took over as CEO of Microsoft, some experienced SEO analysts did not forget to point out that Search veterans were now heading Google (Larry Page), Microsoft (Bing), as well as Yahoo (Marissa Mayer). It has not been lost on many that Nadella once headed the unit which oversaw Bing. My money is on Microsoft investing enough resources in the Bing team that it always remains a close competitor to Google in terms of the quality of Search results, if not market share.

Bing offers plenty of value-added free tools for the searcher, which are not limited to Keyword Research and Webmaster Tools. Bing also powers Search for Yahoo!, and together with Yahoo! offers the combined Yahoo!Bing network for PPC advertising on the web. Even combined, they don’t come anywhere near Google in either organic Search or PPC ads, but they remain in play as a serious act in town.

DOES BING OFFER SUPERIOR SEARCH?

This brings us to the question: are there any areas in which Bing offers a Search experience superior to that of Google? In my limited experience, there are a few. For instance, Bing offers conversion to bitcoin, though Google doesn’t. Not sure how relevant it is, but given the popularity graph of bitcoin, I’m guessing Google may soon offer the service. Anything else?
Sure. I noticed that when you search for books, Bing offers a superior experience. For one, it’s possible in Bing to get a results page without the presence of Amazon.com. I’ve not seen this happen in Google. Also, I noticed that Bing always picks up the average reader rating of a book from Amazon.com and all the other booksellers on the results page, while Google doesn’t pick up the average reader ratings from Amazon.com, though it may do so from other booksellers. Are we done? Not yet. I was merely trying to highlight that there are some areas in which Bing’s Search results can be superior. We’ll continue this research in the next post.
e.o.m.

More nuggets from I’m Feeling Lucky to help us understand Sergey Brin better. Douglas’ experience with running the first set of banner ads for Google made him realize how data-driven the company’s founders were. Google had barter arrangements with Netscape and Go2Net, so the marketing department needed to come up with banner ads. “How many can you have by tomorrow?” Sergey asked. “Why don’t you start with a hundred.” Douglas threw himself at the task, roping in a copywriter and an artist. All the banners were critiqued. The comments from the founders never seemed to end, until one fine day Sergey again said: “We should try a bunch of these out. And see how they perform. We are wasting time by not running them and getting the ultimate feedback — clickthroughs.”
Eventually the ads began to run, and Douglas says the pressure on him seemed to ease as he started the practice of plugging data about which ads were attracting higher CTRs into a spreadsheet and hand-delivering it to Larry and Sergey. He would pull ads which were not working and replace them with new banners. “I became a convert to the power of data persuasiveness,” Douglas writes. Google is now justly famous for A/B testing its headlines and design. Its popular Google Analytics tool, offered free to website owners, even features a fairly extensive A/B testing function.

HOW A/B TESTING BEGAN AT GOOGLE

Wired magazine ran an article on the origins of A/B testing at Google some time ago. The rotation of banner ads narrated above by Douglas Edwards is of course not included in that history of the evolution of A/B testing at the search giant. This is what the Wired article said:

<< At Google — whose rise as a Silicon Valley powerhouse has done more than anything else to spread the A/B gospel over the past decade — engineers ran their first A/B test on February 27, 2000. They had often wondered whether the number of results the search engine displayed per page, which then (as now) defaulted to 10, was optimal for users. So they ran an experiment. To 0.1 percent of the search engine’s traffic, they presented 20 results per page; another 0.1 percent saw 25 results, and another, 30. Due to a technical glitch, the experiment was a disaster. The pages viewed by the experimental groups loaded significantly slower than the control groups, causing the relevant metrics to tank. But that in itself yielded a critical insight — tenths of a second could make or break user satisfaction in a precisely quantifiable way. Soon Google tweaked its response times and allowed real A/B testing to blossom. In 2011 the company ran more than 7,000 A/B tests on its search algorithm.
Amazon.com, Netflix, and eBay are also A/B addicts, constantly testing potential site changes on live (and unsuspecting) users. >>

‘DON’T INVOLVE EVERYONE; IT’S A WASTE OF TIME’

The UI team was debating names and designs for the browser button, a piece of code that allowed users to add Google search links to their browsers, writes Douglas. The mailing list had swelled to 10. Sergey found it ridiculous. Instead of exhaustive user testing and internal deliberations, Sergey said Google should just put up the service and test it when they got a chance. “Don’t forget. We can change it at the drop of a hat,” he said. ‘Launch first, iterate later’ was Sergey’s perspective. Then he expounded his philosophy. “I don’t think we should have any meetings about a project like this,” he said, “or any group emails except the one to users announcing the launch. Having everyone involved in every issue is not a good use of anyone’s time.”

(To be continued. Vignettes taken from the book I’m Feeling Lucky by Douglas Edwards)
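The bookkeeping Douglas describes, and the traffic splitting in the Wired excerpt, both reduce to two small pieces of machinery: assigning each user deterministically to a variant, and comparing clickthrough rates (CTRs) across variants. Here is a minimal sketch in Python; the variant names and the click and impression counts are invented for illustration:

```python
import hashlib

def assign_bucket(user_id, variants):
    """Deterministically map a user to one variant via a hash,
    so the same user always sees the same version of the page."""
    digest = hashlib.md5(user_id.encode("utf-8")).hexdigest()
    return variants[int(digest, 16) % len(variants)]

def ctr(clicks, impressions):
    """Clickthrough rate: clicks divided by impressions."""
    return clicks / impressions if impressions else 0.0

# Hypothetical results for two banner variants.
results = {
    "banner_a": {"clicks": 120, "impressions": 4000},   # CTR 3.0%
    "banner_b": {"clicks": 95, "impressions": 4100},    # CTR ~2.3%
}

# Pick the variant with the higher CTR; the loser would be pulled
# and replaced with a fresh banner, as in Douglas's rotation.
best = max(results, key=lambda v: ctr(**results[v]))
print(best)  # banner_a
```

A real experiment would also check that the difference is statistically significant before declaring a winner, but the core loop of assign, measure, and compare is as simple as this.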
Author: I'm Georgy S. Thomas, the chief SEO architect of SEOsamraat. The Searchable site will track interesting developments in the world of Search Engine Optimization, both in India as well as abroad.