Friday, December 21, 2007

A Festivus for our webmasterus

If it's good enough for the Costanzas, it's good enough for Webmaster Central: it's time for a Festivus for the rest of us (webmasterus)!
Webmaster Central holiday photo
Our special celebration begins not with carols and eggnog, but by remembering some of the popular Webmaster Tools features -- make that Feats of Strength -- for 2007. This year, you gained the ability to chickity-check out your backlinks (<-- that's Festivus-inspired anchor text) and tell Google you want out with URL Removal. And let's not forget Message Center and IDNA support, perfect for those times when [a-zA-Z0-9\-] just doesn't cut it.

Feel the power! Festivus Feats of Strength!

Now comes our webmaster family's traditional Airing of Grievances. You can air your woes and "awww man!"s in the comments below. Just remember that bots may crawl this blog, but we humans review the comments, so please keep your grievances constructive. :) Let us know about features you'd like implemented in Webmaster Tools, articles you'd like written in our blog or Help Center, and stuff you'd like to see in the discussion group. Bonus points if you also explain how your suggestion helps the whole Internet—not just your site's individual rankings. (But of course, we understand that your site ranking number one for all queries in all regions is truly, objectively good for everyone.)

Last, there are so many Festivus Miracles to share! Such as the many helpful members of the discussion group from all around the world, the new friendships formed between Susan Moskwa, JohnMu, Wysz, Matt D, Bergy, Patrick, Nathanj and so many webmasters, and the fun of chatting with our video watchers, fellow conference attendees, and those in the blogosphere keepin' it real.

On behalf of the entire Webmaster Central team, here's to you, Festivus Miracle and Time Magazine's Person of the Year in 2006 -- happy holidays. See you in 2008. :)

Tuesday, December 18, 2007

The Ultimate Fate of Supplemental Results

In 2003, Google introduced a "supplemental index" as a way of showing more documents to users. Most webmasters will probably snicker about that statement, since supplemental docs were famous for refreshing less often and showing up in search results less often. But the supplemental index served an important purpose: it stored unusual documents that we would search in more depth for harder or more esoteric queries. For a long time, the alternative was to simply not show those documents at all, but this was always unsatisfying—ideally, we would search all of the documents all of the time, to give users the experience they expect.

This led to a major effort to rethink the entire supplemental index. We improved the crawl frequency and decoupled it from which index a document was stored in, and once these "supplementalization effects" were gone, the "supplemental result" tag itself—which only served to suggest that otherwise good documents were somehow suspect—was eliminated a few months ago. Now we're coming to the next major milestone in the elimination of the artificial difference between indices: rather than searching some part of our index in more depth for obscure queries, we're now searching the whole index for every query.

From a user perspective, this means that you'll be seeing more relevant documents and a much deeper slice of the web, especially for non-English queries. For webmasters, this means that good-quality pages that were less visible in our index are more likely to come up for queries.

Hidden behind this are some truly amazing technical feats; serving an index this much larger doesn't happen easily, and it took several fundamental innovations to make it possible. At this point it's safe to say that the Google search engine works like nothing else in the world. If you want to know how it actually works, you'll have to come join Google Engineering; as usual, it's all triple-hush-hush secrets.*

* Originally, I was going to give the stock Google answer, "If I told you, I'd have to kill you." However, I've been informed by management that killing people violates our "Don't be evil" policy, so I'm forced to replace that with sounding mysterious and suggesting that good engineers come and join us. Which I'm dead serious about; if you've got the technical chops and want to work on some of the most complex and advanced large-scale software infrastructure in the world, we want you here.

Taking feeds out of our web search results

As a webmaster, you may have been concerned about your RSS/Atom feeds crowding out their associated HTML pages in Google's search results. By serving feeds, we could cause a poor user experience:
  1. Feeds increase the likelihood that users see duplicate search results.
  2. Users clicking on a feed may miss valuable content available only in the HTML page.
To address these concerns, we prevent feeds from being returned in Google's search results, with the exception of podcasts (feeds with multimedia enclosures). We continue to allow podcasts, because we noticed a significant number of them are standalone documents (i.e. no HTML page has the same content) or they have more complete item descriptions than the associated HTML page. However, if, as a webmaster, you'd like your podcasts to be excluded from Google's search results (e.g. if you have a vlog, its feed is probably a podcast), you can use Yahoo's spec for noindex feeds. If you use FeedBurner, making your podcast noindex is as simple as checking a box ("Noindex" under the "Publicize" tab).
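The podcast exception above hinges on multimedia enclosures. As a rough sketch of that distinction (a hypothetical helper, not Google's actual classifier), an RSS feed can be treated as a podcast if any of its items carries an &lt;enclosure&gt; element:

```python
import xml.etree.ElementTree as ET

def is_podcast(feed_xml):
    """Rough check: treat an RSS feed as a podcast if any of its
    items carries a multimedia <enclosure> element."""
    root = ET.fromstring(feed_xml)
    return any(item.find("enclosure") is not None for item in root.iter("item"))
```

A feed without enclosures would fall under the regular feed handling described above.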

As a user, you may ask yourself whether Google has a way to search for feeds. The answer is yes; both Google Reader and iGoogle allow searching for feeds to subscribe to.

We're aware that there are a few non-podcast feeds out there with no associated HTML pages, and thus removing these feeds for now from the search results might be less than ideal. We remain open to other feedback on how to improve the handling of feeds, and especially welcome your comments and questions in the Crawling, Indexing and Ranking subtopic of our Webmaster Help Group.

For the German version of this post, go to "Wir entfernen Feeds aus unseren Suchergebnissen."

Monday, December 17, 2007

Introducing Video Sitemaps

Written by John Fisher-Ogden, Software Engineer, and Amy Wu, Associate Product Manager

In our effort to help users search all the world's public videos, the Google Video team joined the Sitemaps folks to introduce Video Sitemaps—an extension of the Sitemap Protocol that helps make your videos more searchable via Google Video Search. By submitting this video-specific Sitemap in addition to your standard Sitemap, you can specify all the video files on your site, along with relevant metadata. Here's an example:

<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9"
        xmlns:video="http://www.google.com/schemas/sitemap-video/1.0">
  <url>
    <loc>http://www.example.com/video_page.html</loc>
    <video:video>
      <video:player_loc allow_embed="yes">http://www.example.com/player?video=1</video:player_loc>
      <video:title>My funny video</video:title>
      <video:description>A really awesome video</video:description>
    </video:video>
  </url>
</urlset>

To get started, create a Video Sitemap, sign into Google Webmaster Tools, and add the Video Sitemap to your account.

Friday, December 14, 2007

FYI on Google Toolbar's latest features

The latest version of Google Toolbar for Internet Explorer (beta) just added a neat feature to help users arrive at your website, or at least see your content, even when things go awry.

It's frustrating for your users to mistype your URL and receive a generic "404 - Not Found" or try to access a part of your site that might be down.

Even if your site is useful and information-rich, when these issues arise most users just move on to something else. The latest release of Google Toolbar, however, helps users by detecting site issues and providing alternatives.

Website Optimizer or Website Optimiser? The Toolbar can help you find it even if you try "google.cmo" instead of "google.com".

3 site issues detected by Google Toolbar

  1. 404 errors with default error pages
    When a visitor tries to reach your content with an invalid URL and your server returns a short, default error message (less than 512 bytes), the Toolbar will suggest an alternate URL to the visitor. If this is a general problem on your website, you will also see these URLs listed in the crawl errors section of your Webmaster Tools account.

    If you choose to set up a custom error page, make sure it returns result code 404. The content of the 404 page can help your visitors understand that they tried to reach a missing page and provide suggestions on how to find the content they were looking for. When a site displays a custom error page, the Toolbar will no longer provide suggestions for that site. You can check the behavior of the Toolbar by visiting an invalid URL on your site with the Google Toolbar installed.

  2. DNS errors
    When a URL contains a non-existent domain name (like the "google.cmo" typo above), the Toolbar will suggest an alternate, similar-looking URL with a valid domain name.

  3. Connection failures
    When your server is unreachable, the Google Toolbar will automatically display a link to the cached version of your page. This feature is only available when Google is not explicitly forbidden from caching your pages through use of a robots meta tag or crawling is blocked on the page through the robots.txt file. If your server is regularly unreachable, you will probably want to fix that first; but it may also be a good idea to check the Google cache for your pages by looking at the search results for your site.

Suggestions provided by the Google Toolbar

When one of the above situations is found, the Toolbar will try to find the most helpful links for the user. That may include:
  • A link to the corrected URL
    When the Toolbar can find the most probable, active URL to match the user's input (or link they clicked on), it will display it right on top as a suggestion. The correction can be somewhere in the domain name, the path or the file name (the Toolbar does not look at any parameters in the URL).

  • A link to the cached version of the URL
    When the Toolbar recognizes the URL in the Google cache, it will display a link to the cached version. This is particularly useful when the user can't access your pages for some reason. As mentioned above, Google may cache your URLs provided you're not explicitly forbidding this through use of a robots meta tag or the robots.txt file.

  • A link to the homepage or HTML site map of your site
    Sometimes going to the homepage or a site map page is the best way to find the page that a user is really looking for. Site map pages (these are not XML Sitemap files) are generally recognized based on the file name; if the Toolbar can find something called "sitemap.html" or similar, this page will probably be recognized as the site map page. Don't worry if your site map page is called something else; if a user decides to go to your homepage, they'll probably find it right away even if the Toolbar doesn't spot it.

  • A link to a higher level folder
    Sometimes the homepage or site map page is too far out and the user would be better off just going one step up in the hierarchy. When the Toolbar can recognize that your site's structure is based on folders and sub-folders, it may suggest a page one step back.

  • A search within your site for keywords found in the URL
    It's good practice to use descriptive URLs. If the Toolbar can recognize keywords within the URL which the user tried to access, it will link to a site search with those keywords. Even if the URL has changed significantly in the meantime, the search may be able to find similar content based on those keywords. For instance, if the URL contained the words "party", "gifts" and "holidays", it will suggest a search for those words within the site.

  • An open Google search box
    If all else fails, there's always a chance that similar content already exists elsewhere on the web. The Google web search can help your users to find it - the Toolbar will help you by adding the keywords found in the URL to the search box.

Are you curious already? Download the Google Toolbar for your browser and give it a try on your site!

To discuss how this feature can help visitors to your site, jump in to our Google Webmaster Help Group; or for general Google Toolbar questions, try the Toolbar group for Internet Explorer or the Toolbar group for Firefox.

Thursday, December 13, 2007

New: Content analysis and Sitemap details, plus more languages

We're always striving to help webmasters build outstanding websites, and in our latest release we have two new features: Content analysis and Sitemap details. We hope these features help you to build a site you could compare to a fine wine -- getting better and better over time.

Content analysis

To help you improve the quality of your site, our new content analysis feature should be a helpful addition to the crawl error diagnostics already provided in Webmaster Tools. Content analysis contains feedback about issues that may impact the user experience or that may make it difficult for Google to crawl and index pages on your site. By reviewing the areas we've highlighted, you can help eliminate potential issues that could affect your site's ability to be crawled and indexed. This results in better indexing of your site by Google and other search engines.

The Content analysis summary page within the Diagnostics section of Webmaster Tools features three main categories. Click on a particular issue type for more details:

  • Title tag issues
  • Meta description issues
  • Non-indexable content issues

content analysis usability section

Selecting "Duplicate title tags" displays a list of repeated page titles along with a count of how many pages contain that title. We currently present up to thirty duplicated page titles on the details page. If the duplicate title issues shown are corrected, we'll update the list to reflect any other pages that share duplicate titles the next time your website is crawled.

Also, in the Title tag issues category, we show "Long title tags" and "Short title tags." For these issue types we will identify title tags that are way too short (for example "IT" isn't generally a good title tag) or way too long (title tag was never intended to mean <insert epic novel here>). A similar algorithm identifies potentially problematic meta description tags. While these pointers won't directly help you rank better (i.e. pages with <title> length x aren't moved to the top of the search results), they may help your site display better titles and snippets in search results, and this can increase visitor traffic.
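If you'd like to run a similar check over your own pages between crawls, a minimal sketch (a hypothetical helper with illustrative thresholds, not the Webmaster Tools algorithm) might flag duplicate, very short, and very long titles like this:

```python
from collections import Counter

def title_issues(pages, min_len=5, max_len=70):
    """pages: dict mapping URL -> <title> text.

    Length thresholds are illustrative, not Google's actual limits.
    """
    counts = Counter(t.strip() for t in pages.values())
    return {
        "duplicate": sorted(t for t, n in counts.items() if n > 1),
        "too_short": sorted(u for u, t in pages.items() if len(t.strip()) < min_len),
        "too_long": sorted(u for u, t in pages.items() if len(t.strip()) > max_len),
    }
```

Feeding it a crawl of your site gives a quick local preview of the kinds of issues the Title tag category reports.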

In the "Non-indexable content issues" category, we give you a heads-up about areas of your site that aren't as friendly to our text-based crawler. Be sure to check out our posts on Flash and images to learn how to make these items more search-engine friendly.

content analysis crawlability section

Sitemap details page

If you've submitted a Sitemap, you'll be happy to see the additional information in Webmaster Tools revealing how your Sitemap was processed. You can find this information on the newly available Sitemap Details page, which (along with information that was previously provided for each of your Sitemaps) shows you the number of pages from your Sitemap that were indexed. Keep in mind that the number of pages indexed from your Sitemap may not be 100% accurate, because the indexed number is updated periodically, but it's more accurate than running a "site:" query on Google.

The new Sitemap Details page also lists any errors or warnings that were encountered when specific pages from your Sitemap were crawled. So the time you might have previously spent on crafting custom Google queries to determine how many pages from your Sitemap were indexed, can now be spent on improving your site. If your site is already the crème de la crème, you might prefer to spend the extra free time mastering your ice-carving skills or blending the perfect eggnog.

Here's a view of the new Sitemap details page:

Sitemaps are an excellent way to tell Google about your site's most important pages, especially if you have new or updated content that we may not know about. If you haven't yet submitted a Sitemap or have questions about the process, visit our Webmaster Help Center to learn more.

Webmaster Tools now available in Czech & Hungarian

We love expanding our product to help more people in their language of choice. We recently added Czech and Hungarian to the 20 other languages Webmaster Tools already supports, and we won't be stopping here. If your language of choice isn't currently supported, stay tuned -- there'll be even more supported languages to come.

We always love to hear what you think. Please visit our Webmaster Help Group to share comments or ask questions.

Thursday, December 6, 2007

Using ALT attributes smartly

Here's the second of our video blog posts. Matt Cutts, the head of Google's webspam team, provides some useful tips on how to optimize the images you include on your site, and how simply providing useful, accurate information in your ALT attributes can make your photos and pictures more discoverable on the web. Ms Emmy Cutts also makes an appearance.

Like videos? Hate them? Have a great idea we should cover? Let us know what you think in our Webmaster Help Group.

Update: Some of you have asked about the difference between the "alt" and "title" attributes. According to the W3C recommendations, the "alt" attribute specifies an alternate text for user agents that cannot display images, forms or applets. The "title" attribute is a bit different: it "offers advisory information about the element for which it is set." As the Googlebot does not see the images directly, we generally concentrate on the information provided in the "alt" attribute. Feel free to supplement the "alt" attribute with "title" and other attributes if they provide value to your users!

Tuesday, December 4, 2007

Answering more popular picks: meta tags and web search

Written by , Webmaster Trends Analyst, Zürich

In writing and maintaining accurate meta tags (e.g., descriptive titles and robots information), you help Google to more accurately crawl, index and return your site in search results. Meta tags provide information to all sorts of clients, such as browsers and search engines. Just keep in mind that each client will likely only interpret the meta tags that it uses, and ignore the rest (although they might be useful for other reasons).

Here's how Google would interpret meta tags of this sample HTML page:

<!DOCTYPE …>
<head>
<title>Traditional Swiss cheese fondue recipes</title>      <!-- utilized by Google, accuracy is valuable to webmasters -->
<meta name="description" content="Cheese fondue is …">      <!-- utilized by Google, can be shown in our search results -->
<meta name="revisit-after" content="14 days">               <!-- not utilized by Google or other major search engines -->
<META name="verify-v1" content="e8JG…Nw=" />                <!-- optional, for Google webmaster tools -->
<meta name="GoogleBot" content="noOdp">                     <!-- optional -->
<meta …>
<meta …>

<meta name="description" content="A description of the page">
This tag provides a short description of the page. In some situations this description is used as a part of the snippet shown in the search results. For more information, please see our blog post "Improve snippets with a meta description makeover" and the Help Center article "How do I change my site's title and description?" While the use of a description meta tag is optional and will have no effect on your rankings, a good description can result in a better snippet, which in turn can help to improve the quality and quantity of visitors from our search results.

<title>The title of the page</title>
While technically not a meta tag, this tag is often used together with the "description." The contents of this tag are generally shown as the title in search results (and of course in the user's browser when visiting the page or viewing bookmarks). Some additional information can be found in our blog post "Target visitors or search engines?", especially under "Make good use of page titles."

<meta name="robots" content="…, …">
<meta name="googlebot" content="…, …">
These meta tags control how search engines crawl and index the page. The "robots" meta tag specifies rules that apply to all search engines, while the "googlebot" meta tag specifies rules that apply only to Google. Google understands the following values (when specifying multiple values, separate them with a comma):

  • noindex — prevents the page from being included in the index
  • nofollow — prevents Googlebot from following any links on the page
  • nosnippet — prevents a snippet from being shown in the search results
  • noodp — blocks the Open Directory Project description from being used
  • noarchive — prevents Google from showing a cached version of the page
  • unavailable_after:[date] — lets you specify a time after which the page should no longer appear in search results
  • none — equivalent to "noindex, nofollow"

The default rule is "index, follow" -- this is used if you omit this tag entirely or if you specify content="all." Additional information about the "robots" meta tag can be found in "Using the robots meta tag." As a side-note, you can now also specify this information in the header of your pages using the "X-Robots-Tag" HTTP header directive. This is particularly useful if you wish to fine-tune crawling and indexing of non-HTML files like PDFs, images or other kinds of documents.
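The parsing behavior described above (comma-separated values, case-insensitive, with "index, follow" as the default when the tag is omitted or set to "all") can be sketched as follows. This is an illustration of the documented behavior, not Google's code:

```python
def parse_robots_meta(content):
    """Parse a robots meta 'content' value into a set of directives.

    Values are comma-separated and case-insensitive; an empty value
    or "all" falls back to the default rule, "index, follow".
    """
    directives = {d.strip().lower() for d in content.split(",") if d.strip()}
    if not directives or directives == {"all"}:
        return {"index", "follow"}
    return directives
```

Note how a page with no robots meta tag at all ends up with the same directives as one declaring content="all".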

<meta name="google" content="notranslate">
When we recognize that the contents of a page are not in the language that the user is likely to want to read, we often provide a link in the search results to an automatic translation of your page. In general, this gives you the chance to provide your unique and compelling content to a much larger group of users. However, there may be situations where this is not desired. By using this meta tag, you can signal that you do not wish for Google to provide a link to a translation for this page. This meta tag generally does not influence the ranking of the page for any particular language. More information can be found in the "Google Translate FAQ".

<meta name="verify-v1" content="…">
This Google webmaster tools-specific meta tag is used on the top-level page of your site to verify ownership of a site in webmaster tools (alternatively you may upload an HTML file to do this). The content value you put into this tag is provided to you in your webmaster tools account. Please note that while the contents of this meta tag (including upper and lower case) must match exactly what is provided to you, it does not matter if you change the tag from XHTML to HTML or if the format of the tag matches the format of your page. For details, see "How do I verify my site by adding a meta tag to my site's home page?"

<meta http-equiv="Content-Type" content="…; charset=…">
This meta tag defines the content-type and character set of the page. When using this meta tag, make sure that you surround the value of the content attribute with quotes; otherwise the charset attribute may be interpreted incorrectly. If you decide to use this meta tag, it goes without saying that you should make sure that your content is actually in the specified character set. "Google Webauthoring Statistics" has interesting numbers on the use of this meta tag.

<meta http-equiv="refresh" content="…;url=…">
This meta tag sends the user to a new URL after a certain amount of time, and is sometimes used as a simple form of redirection. This kind of redirect is not supported by all browsers and can be confusing to the user. If you need to change the URL of a page as it is shown in search engine results, we recommend that you use a server-side 301 redirect instead. Additionally, W3C's "Techniques and Failures for Web Content Accessibility Guidelines 2.0" lists it as deprecated.
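To illustrate the recommended alternative, here's a minimal sketch of the decision a server makes when issuing a 301 for a moved page (the paths and mapping are hypothetical):

```python
def respond(path, moved):
    """Return (status, headers) for a request path, preferring a
    server-side 301 redirect over a meta refresh for moved pages."""
    if path in moved:
        return 301, {"Location": moved[path]}
    return 200, {}
```

Unlike a meta refresh, the 301 status tells search engines unambiguously that the old URL has permanently moved, so they can transfer the page's listing to the new URL.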

(X)HTML and Capitalization
Google can read both HTML and XHTML-style meta tags (regardless of the code used on the page). In addition, upper or lower case is generally not important in meta tags -- we treat <TITLE> and <title> equally. The "verify-v1" meta tag is an exception: it's case-sensitive.

"revisit-after" vs. Sitemap lastmod and changefreq
Occasionally webmasters needlessly include "revisit-after" in an attempt to influence a search engine's crawl schedule; however, this meta tag is largely ignored. If you want to give search engines information about changes in your pages, use and submit an XML Sitemap. In this file you can specify the last-modified date and the change frequency of the URLs on your site.

If you're interested in more examples or have questions about the meta tags mentioned above, jump into our Google Webmaster Help Group and join the discussion.

Update: In case you missed it, the other popular picks were answered in the Webmaster Help Group.

Saturday, December 1, 2007

Information about buying and selling links that pass PageRank

Our goal is to provide users the best search experience by presenting equitable and accurate results. We enjoy working with webmasters, and an added benefit of our working together is that when you make better and more accessible content, the internet, as well as our index, improves. This in turn allows us to deliver more relevant search results to users.

If, however, a webmaster chooses to buy or sell links for the purpose of manipulating search engine rankings, we reserve the right to protect the quality of our index. Buying or selling links that pass PageRank violates our webmaster guidelines. Such links can hurt relevance by causing:

- Inaccuracies: False popularity and links that are not fundamentally based on merit, relevance, or authority
- Inequities: Unfair advantage in our organic search results to websites with the biggest pocketbooks

In order to stay within Google's quality guidelines, paid links should be disclosed through a rel="nofollow" or other techniques such as doing a redirect through a page which is robots.txt'ed out. Here's more information explaining our stance on buying and selling links that pass PageRank:

February 2003: Google's official quality guidelines have advised "Don't participate in link schemes designed to increase your site's ranking or PageRank" for several years.

September 2005: I posted on my blog about text links and PageRank.

December 2005: Another post on my blog discussed this issue, and said

Many people who work on ranking at search engines think that selling links can lower the quality of links on the web. If you want to buy or sell a link purely for visitors or traffic and not for search engines, a simple method exists to do so (the nofollow attribute). Google’s stance on selling links is pretty clear and we’re pretty accurate at spotting them, both algorithmically and manually. Sites that sell links can lose their trust in search engines.

September 2006: In an interview with John Battelle, I noted that "Google does consider it a violation of our quality guidelines to sell links that affect search engines."

January 2007: I posted on my blog to remind people that "links in those paid-for posts should be made in a way that doesn’t affect search engines."

April 2007: We provided a mechanism for people to report paid links to Google.

June 2007: I addressed paid links in my keynote discussion during the Search Marketing Expo (SMX) conference in Seattle. Here's a video excerpt from the keynote discussion. It's less than a minute long, but highlights that Google is willing to use both algorithmic and manual detection of paid links that violate our quality guidelines, and that we are willing to take stronger action on such links in the future.

June 2007: A post on the official Google Webmaster Blog noted that "Buying or selling links to manipulate results and deceive search engines violates our guidelines." The post also introduced a new official form in Google's webmaster console so that people could report buying or selling of links.

June 2007: Google added more specific guidance to our official webmaster documentation about how to report buying or selling links and what sort of link schemes violate our quality guidelines.

August 2007: I described Google's official position on buying and selling links in a panel dedicated to paid links at the Search Engine Strategies (SES) conference in San Jose.

September 2007: In a post on my blog recapping the SES San Jose conference, I also made my presentation available to the general public (PowerPoint link).

October 2007: Google provided comments for a Forbes article titled "Google Purges the Payola".

October 2007: Google officially confirmed to Search Engine Land that we were taking stronger action on this issue, including decreasing the toolbar PageRank of sites selling links that pass PageRank.

October 2007: An email that I sent to Search Engine Journal also made it clear that Google was taking stronger action on buying/selling links that pass PageRank.

We appreciate the feedback that we've received on this issue. A few of the more prevalent questions:

Q: Is buying or selling links that pass PageRank a violation of Google's guidelines? Why?
A: Yes, it is, for the reasons we mentioned above. I also recently did a post on my personal blog that walks through an example of why search engines wouldn't want to count such links. On a serious medical subject (brain tumors), we highlighted people being paid to write about a brain tumor treatment when they hadn't been aware of the treatment before, and we saw several cases where people didn't do basic research (or even spellchecking!) before writing paid posts.

Q: Is this a Google-only issue?
A: No. All the major search engines have opposed buying and selling links that affect search engines. For the Forbes article Google Purges The Payola, Andy Greenberg asked other search engines about their policies, and the results were unanimous. From the story:

Search engines hate this kind of paid-for popularity. Google's Webmaster guidelines ban buying links just to pump search rankings. Other search engines including Ask, MSN, and Yahoo!, which mimic Google's link-based search rankings, also discourage buying and selling links.

Other engines have also commented about this individually, e.g. a search engine representative from Microsoft commented in a recent interview and said

The reality is that most paid links are a.) obviously not objective and b.) very often irrelevant. If you are asking about those then the answer is absolutely there is a risk. We will not tolerate bogus links that add little value to the user experience and are effectively trying to game the system.

Q: Is that why we've seen some sites that sell links receive lower PageRank in the Google toolbar?
A: Yes. If a site is selling links, that can affect our opinion about the value of that site or cause us to lose trust in that site.

Q: What recourse does a site owner have if their site was selling links that pass PageRank, and the site's PageRank in the Google toolbar was lowered?
A: The site owner can address the violations of the webmaster guidelines and submit a reconsideration request in Google's Webmaster Central console. Before doing a reconsideration request, please make sure that all sold links either do not pass PageRank or are removed.

Q: Is Google trying to tell webmasters how to run their own site?
A: No. We're giving advice to webmasters who want to do well in Google. As I said in this video from my keynote discussion in June 2007, webmasters are welcome to make their sites however they like, but Google in turn reserves the right to protect the quality and relevance of our index. To the best of our knowledge, all the major search engines have adopted similar positions.

Q: Is Google trying to crack down on other forms of advertisements used to drive traffic?
A: No, not at all. Our webmaster guidelines clearly state that you can use links as means to get targeted traffic. In fact, in the presentation I did in August 2007, I specifically called out several examples of non-Google advertising that are completely within our guidelines. We just want disclosure to search engines of paid links so that the paid links won't affect search engines.

Q: I'm aware of a site that appears to be buying/selling links. How can I get that information to Google?
A: Read our official blog post about how to report paid links from earlier in 2007. We've received thousands and thousands of reports in just a few months, but we welcome more reports. We appreciate the feedback, because it helps us take direct action as well as improve our existing algorithmic detection. We also use that data to train new algorithms for paid links that violate our quality guidelines.

Q: Can I get more information?
A: Sure. I wrote more answers about paid links earlier this year if you'd like to read them. And if you still have questions, you can join the discussion in our Webmaster Help Group.

Monday, November 26, 2007

The anatomy of a search result

When Matt Cutts, who heads up Google's webspam team, dropped by our Kirkland offices a little while ago, we found ourselves with a video camera and an hour to spare. The result? We quickly put together a few videos we hope you'll find useful.

In our first video, Matt talks about the anatomy of a search result, and gives some useful tips on how you can help improve how your site appears in our results pages. This talk covers everything you'll see in a search result, including page title, page description, and sitelinks, and explains other elements that can appear, such as stock quotes, cached page links, and more.

If you like the video format (and even if you don't), or have ideas for subjects you'd like covered in the future, let us know what you think in our Webmaster Help Group. And rest assured, we'll be working to improve the sound quality for our next batch of vids.

Wednesday, November 21, 2007

A dozen ways to discuss "webmaster help"

Our goal for the Webmaster Help Group is to be an authoritative source for accurate, friendly information and discussion. There are many terrific members of the Webmaster Groups community, and we're glad to know them all. In our English discussion group, a big Webmaster Central (WMC) thank-you to our comrades and fellow webmasters for their helpful knowledge and insight: webado, Phil Payne, JLH, cass-hacks, cristina, Sebastian, and dockarl, just to name a few.

Webado and cass-hacks both speak several languages -- thankfully, some of us do as well. We now have Googlers posting to the Google Webmaster Help Group in 12 languages! Here's a brief introduction to the Googlers posting in the non-English groups, most of whom work together at our European headquarters in Dublin (several have been posting for months, but we'd still like to give them an intro). :)

French Webmaster Help Group
Salut, I come from the French city of Bordeaux, where I spent most of my time before moving to Paris and then Dublin, where I now work in Google Search Quality. When not in front of my computer, I like to go to the cinema, play chess, and organize dinners with my friends.
- Guide Google
Italian Webmaster Help Group
Ciao, my name is Stefano and I’m responsible for the Italian Webmaster Help Group. I work on search quality issues in Italian. I’m from Italy and have been living in Ireland for more than 2 years. I do love the multicultural environment you can find in Dublin and all the people from everywhere you get to know here, but sometimes it’s difficult to be so far away from my favorite football team, so now and then I really have to fly back home to get a bit of Serie A.
- Guida Google
German Webmaster Help Group
Grüss Gott! My name is Uli, and I post in the German Webmaster Help Group. I am originally from Germany but live in Ireland now. Unfortunately, I don't have my own website to show off. The German Help Group has grown into a big, vibrant community of very helpful and savvy webmasters, so if you speak German, go and check it out!
- Google Webmeister Guide
Spanish Webmaster Help Group
Hola! My name is Alvar and I'll be monitoring the Spanish Webmaster Help Group. Please join us if you speak a word or two of Spanish :-) On the personal side, I don't own a portal or anything like that, but rather a tiny blog with nearly no visibility on the Internet, and I'm happy with that. I studied telecommunication engineering, and my hobbies include soccer, foosball, table tennis, and basically any other sport, as well as traveling, photography, cinema, and technology -- so I admit sitting in front of a computer can be counted as a hobby :-) Another important fact about me is that I'm from Barcelona, a city everyone should visit at least once in their life. What are you waiting for?
- Guía de Google para webmasters

Hola, I'm Rebecca. I studied to be a librarian but somehow along the way ended up being drawn into the digital side of information. So while I still snuggle up to books at night, computers take up most of my day. As for things I like to do (but wouldn't go so far as to call hobbies…), I'm still pretty new to Dublin, so I rather enjoy walking around until I'm lost and then figuring out how to get back home. Once home, I like to play with my cat, best known for her fantastic Gollum impersonation when she gets riled up.
- Guía de Google
Dutch Webmaster Help Group
Hallo, I'm Andre. I'm very fond of Dutch music, but since living in Dublin for almost 2 years now, my taste in music has fused with the Irish sound. I like listening to live music in pubs, hanging out with the locals, having a pint or two, and talking about upcoming gigs, artists, and all the other topics that pass the day.
- André
Swedish Webmaster Help Group
Hejsan! My name is Hessam and I'm responsible for the Swedish Webmaster Help Group. I've been with Google for the last 2 years, working on search quality issues in Sweden. I'm originally from Sweden but moved to Dublin two years ago. My main interest is traveling, and living in Dublin makes it easy to visit all corners of Europe without blowing the budget. Thanks to cheap airlines, it takes merely a few hours from my door to the beer gardens of Munich, wine bars of Paris, ski slopes of Italy, or beaches of Spain, depending on the mood. Looking forward to talking to you all!
- Google Webbansvarig Guide
Finnish Webmaster Help Group
Hei, I'm Anu and I work in the Search Quality team. I'm originally from Finland but these days I hold my umbrella high in Dublin. When I'm not online, you can catch me cycling (be it one or two wheels), playing virtual tennis or at the airport. I've been bitten by the travel bug, and try to see as many places near and far as possible. Besides all things webmaster related, I also have an interest in foreign languages, books and films. I look forward to meeting you in the Finnish Webmaster Help Group!
- Googlen Web-ylläpidon Ryhmän Opas
Polish Webmaster Help Group
Cześć, I'm Guglarz (it stands for Googler in Polish), the Googler on the Polish Webmaster Help Group. I was lucky to grow up in Kraków, Poland's most beautiful city and the place where Google recently opened a research center. I've been with Google for two years now and I still love this job as much as I did the very first day. In fact, it's my favorite hobby. When I'm not working, I like to keep myself busy with general aviation, running, or bowling, a sport I recently found out I'm talented in. ;-)

I discovered my passion for the Internet early in school and after graduating in information science studies I was looking for a challenging position in the industry, although after the year 2000 crash there was little hope for that. It took me a couple of jobs in the established industries and some traveling around the globe before I found my dream job here at Google.

Ever since I started helping on the Polish Webmaster Help Group, it has been growing rapidly, both in user numbers and in activity. It's really exciting to see how Polish webmasters help each other and make the web a more interesting place. Three group members in particular, Cezary Lech, Umik, and krzys, made an effort to vitalize the community in its early days. I'd like to say dziękuję (thank you in Polish) -- please keep up the great spirit, thumbs up!
- Guglarz
Portuguese Webmaster Help Group
Olá, my name is Pedro. I'm Portuguese and I'm part of the Search Quality team. I've been working at Google since March 2006, mostly focused on the Portuguese-language markets. I grew up in Tavira, a small town in the Algarve region in the south of Portugal, and I've always had a nerdy side, playing with computers since my very early days, when memory meant 128KB. Most of my interests trace back to my origins: I enjoy sailing and scuba diving, and music is also high on my list. I'm based in our European headquarters in Dublin, and I'll be looking to strengthen contact with Portuguese webmasters (non-Portuguese are also welcome).
- Ajuda a Webmasters do Google
Russian Webmaster Help Group
Привет! My name is Oxana and I come from Moldova, a teeny tiny country in Eastern Europe. My background is in mathematics and computer science, and I have worked as a web developer for more than 7 years now. Of course I have a website, but it unfortunately features only an eternal "under construction" message and a hope for a better future. :) I love to read and to travel, and at the moment I am a helpless wannabe photographer. Also, I'm a passionate WoW player and soon I'll be the best Orc Warlock on this side of Kalimdor! When I'm being a grown-up, I work at Google on the Search Quality team, primarily supporting the Russian market.
- Оксана
Danish Webmaster Help Group
Hej, my name is Jonas, and I am from Copenhagen, the wonderful capital of beautiful Denmark. I've been the webmaster of a blog since 2001, where I still drop a few lines every now and then. I am a jack of many trades, with a background in human geography and communication, design, and media. I've done some authoring for the web, but mostly administrative backends in PHP/MySQL, so they are not that interesting. I've been active on Usenet for a while as well, and spent many hours there getting smarter with the help of others.

I've been with Google for a couple of years now, working exclusively with search quality and I am now helping out in the Danish Webmaster Help Group. Looking forward to seeing you there (:
- GoogleGuide

Monday, November 19, 2007

Bringing the conference to you

We're fortunate to meet many of you at conferences, where we can chat about web search and Webmaster Tools. We receive a lot of good feedback at these events: insight into the questions you're asking and issues you're facing. However, as several of our Webmaster Help Group friends have pointed out, not everyone can afford the time or expense of a conference; and many of you live in regions where webmaster-related conferences are rare.

So, we're bringing the conference to you.

We've posted notes in our Help Group from conferences we recently attended.
Next month, Jonathan and Wysz will post their notes from PubCon, while Bergy and I will cover SES Chicago.

If you can make it to one of these, we'd love to meet you face to face, but if you can't, we hope you find our jottings useful.

Monday, November 12, 2007

Go Daddy and Google offer easy access to Webmaster Tools

Welcome, Go Daddy webmasters, to the Google Webmaster Tools family! Today, we're announcing that Go Daddy, the world's largest hostname provider, is working with us as a pilot partner so that their customers can more easily access Google Webmaster Tools. Go Daddy is a great partner, and we hope to educate more webmasters on how to make their sites more search engine-friendly.

Go Daddy users will now see our link right in their hosting control center, and can launch Google Webmaster Tools directly from their hosting account. And Go Daddy makes the Google Webmaster Tools account creation process faster by adding the site, verifying the site, and submitting Sitemaps on behalf of hosting customers. Our tools show users how Google views their site, give useful stats like queries and links, diagnose problems, and share information with us in order to improve their site's visibility in search results.

As a continuation of these efforts, we look forward to working with other web hosting companies to add Google Webmaster Tools to their products soon.

And in case you're wondering, Webmaster Tools will stay 100% the same for current users. If you have questions or suggestions about our partnership with Go Daddy, let us know in our Webmaster community discussion groups.

Tuesday, November 6, 2007

A spider's view of Web 2.0

Update on July 29, 2010: We've improved our Flash indexing capability and we also now support an AJAX crawling scheme! Please check out the posts (linked above) for more details.

Many webmasters have discovered the advantages of using Ajax to improve the user experience on their sites, creating dynamic pages that act as powerful web applications. But, like Flash, Ajax can make a site difficult for search engines to index if the technology is not implemented carefully. As promised in our post answering questions about Server location, cross-linking, and Web 2.0 technology, we've compiled some tips for creating Ajax-enhanced websites that are also understood by search engines.

How will Google see my site?

One of the main issues with Ajax sites is that while Googlebot is great at following and understanding the structure of HTML links, it can have a difficult time finding its way around sites which use JavaScript for navigation. While we are working to better understand JavaScript, your best bet for creating a site that's crawlable by Google and other search engines is to provide HTML links to your content.

Design for accessibility

We encourage webmasters to create pages for users, not just search engines. When you're designing your Ajax site, think about the needs of your users, including those who may not be using a JavaScript-capable browser. There are plenty of such users on the web, including those using screen readers or mobile devices.

One of the easiest ways to test your site's accessibility to this type of user is to explore the site in your browser with JavaScript turned off, or by viewing it in a text-only browser such as Lynx. Viewing a site as text-only can also help you identify other content which may be hard for Googlebot to see, including images and Flash.

Develop with progressive enhancement

If you're starting from scratch, one good approach is to build your site's structure and navigation using only HTML. Then, once you have the site's pages, links, and content in place, you can spice up the appearance and interface with Ajax. Googlebot will be happy looking at the HTML, while users with modern browsers can enjoy your Ajax bonuses.

Of course you will likely have links requiring JavaScript for Ajax functionality, so here's a way to help Ajax and static links coexist:
When creating your links, format them so they'll offer a static link as well as call a JavaScript function. That way you'll have the Ajax functionality for JavaScript users, while non-JavaScript users can ignore the script and follow the link. For example:

<a href="ajax.html?foo=32" onClick="navigate('ajax.html#foo=32'); return false">foo 32</a>

Note that the static link's URL has a parameter (?foo=32) instead of a fragment (#foo=32), which is used by the Ajax code. This is important, as search engines understand URL parameters but often ignore fragments. Web developer Jeremy Keith labeled this technique as Hijax. Since you now offer static links, users and search engines can link to the exact content they want to share or reference.

While we're constantly improving our crawling capability, using HTML links remains a strong way to help us (as well as other search engines, mobile devices and users) better understand your site's structure.

Follow the guidelines

In addition to the tips described here, we encourage you to also check out our Webmaster Guidelines for more information about what can make a site good for Google and your users. The guidelines also point out some practices to avoid, including sneaky JavaScript redirects. A general rule to follow is that while you can provide users different experiences based on their capabilities, the content should remain the same. For example, imagine we've created a page for Wysz's Hamster Farm. The top of the page has a heading of "Wysz's Hamster Farm," and below it is an Ajax-powered slideshow of the latest hamster arrivals. Turning JavaScript off on the same page shouldn't surprise a user with additional text reading:
Wysz's Hamster Farm -- hamsters, best hamsters, cheap hamsters, free hamsters, pets, farms, hamster farmers, dancing hamsters, rodents, hampsters, hamsers, best hamster resource, pet toys, dancing lessons, cute, hamster tricks, pet food, hamster habitat, hamster hotels, hamster birthday gift ideas and more!
A more ideal implementation would display the same text whether JavaScript was enabled or not, and in the best scenario, offer an HTML version of the slideshow to non-JavaScript users.

This is a pretty advanced topic, so please continue the discussion by asking questions and sharing ideas over in the Webmaster Help Group. See you there!

Wednesday, October 31, 2007

Happy Halloween to our spooktacular webmasters!

With apologies to Vic Mizzy, we've written a short verse to the tune of the "Addams Family" theme (please use your imagination):

We may be hobbyists or just geeky,
Building websites and acting cheeky,
Javascript redirects we won't make sneaky,
Our webmaster fam-i-ly!

Happy Halloween everyone! Feel free to join the discussion and share your Halloween stories and costumes.

Magnum P.I., Punk Rocker, Rubik's Cube, Mr. T., and Rainbow Brite
a.k.a. Several members of our Webmaster Tools team: Dennis Geels, Jonathan Simon, Sean Harding, Nish Thakkar, and Amanda Camp

Panda and Lolcat
Or just Evan Tang and Matt Cutts?

7 Indexing Engineers and 1 Burrito

Cheese Wysz, Internet Repairman, Community Chest, Internet Pirate (don't tell the RIAA)
Helpful members of the Webmaster Help Group: Wysz, MattD, Nathan Johns (nathanj), and Bergy

Webspam Engineer Shashi Thakur (in the same outfit he wore to Searchnomics)

Hawaiian Surfer Dude and Firefox
Members of Webmaster Central's communications team: Reid Yokoyama and Mariya Moeva

Napoleon Dynamite and Raiderfan
Shyam Jayaraman (speaking at SES Chicago, hopefully doing the dance) and me

Better geographic choices for webmasters

Written by Amanda Camp, Webmaster Tools and Trystan Upstill, International Search Quality Team

Starting today Google Webmaster Tools helps you better control the country association of your content on a per-domain, per-subdomain, or per-directory level. The information you give us will help us determine how your site appears in our country-specific search results, and also improves our search results for geographic queries.

We currently only allow you to associate your site with a single country and location. If your site is relevant to an even more specific area, such as a particular state or region, feel free to tell us that. Or let us know if your site isn't relevant to any particular geographic location at all. If no information is entered in Webmaster Tools, we'll continue to make geographic associations largely based on the top-level domain (e.g. .ca) and the IP address of the webserver from which the content was served.

For example, if we wanted to associate a site with Hungary:

But if you don't want the site associated with any country...

This feature is restricted for sites with a country-code top-level domain, as we'll always associate such a site with its country. (For example, will always be the version of Google associated with Russia.)

Note that in the same way that Google may show your business address if you register your brick-and-mortar business with the Google Local Business Center, we may show the information that you give us publicly.

This feature was largely initiated by your feedback, so thanks for the great suggestion. Google is always committed to helping more sites and users get better and more relevant results. This is a new step as we continue to think about how to improve searches around the world.

We encourage you to tell us what you think in the Webmaster Tools section of our discussion group.

Thursday, October 25, 2007

Dealing with Sitemap cross-submissions

Since the launch of Sitemaps, webmasters have been asking if they could submit their Sitemaps for multiple hosts on a single dedicated host. A fair question -- and now you can!

Why would someone want to do this? Let's say that you own and, and you have Sitemaps for both hosts, e.g. sitemap-example.xml and sitemap-mysite.xml. Until today, you would have to store each Sitemap on its respective host. If you tried to place sitemap-mysite.xml on, you would get an error because, for security reasons, a Sitemap on can only contain URLs from So how do we solve this? Well, if you can "prove" that you own or control both of these hosts, then either one can host a Sitemap containing URLs for the other. Just follow the normal verification process in Google Webmaster Tools, and any verified site in your account will be able to host Sitemaps for any other verified site in the same account.

Here is an example showing both sites verified:

And now, from a single host, you can submit Sitemaps for both sites without any errors: sitemap-example.xml contains URLs from and sitemap-mysite.xml contains URLs from, but both now reside on
We've also added more information on handling cross-submits in our Webmaster Help Center.
For those of you wondering how this affects the other search engines that support the Sitemap Protocol, rest assured that we're talking to them about how to make cross-submissions work seamlessly across all of them. Until then, this specific solution will work only for users of Google Webmaster Tools.

Thursday, October 18, 2007

Blast from the past

Written by Sahala Swenson, Webmaster Tools Team

As you know, the queries used to find your website in search results can change over time. Your website content changes, as do the needs of all the busy searchers out there. Whether the queries associated with your site change subtly or dramatically, it's pretty useful to see how they transform over time.

Recognizing this, Top Search Queries in Webmaster Tools now presents historical data and other enhancements. Let's take a closer look:

Up to 6 months of historical data:
Previously we only showed query stats for the last 7 days. Now you can jump between 9 query stats snapshots ranging from now to 6 months ago. Note that the time interval for each of these snapshots differs: for the 7-day, 2-week, and 3-week snapshots, we report the top queries for the previous week; for the 1- to 6-month snapshots, we report statistics for the previous month. Some of you may also notice that you don't have query stats going back a full 6 months. We hope to improve that in the future. :)

Top query percentages:
You might have noticed a new column in the top query listings. Previously we just ranked your query results and clicks. While useful, this didn't really tell you to what extent one query was ranked higher than another. Now we show what percentage each query result or click represents out of the top 20 queries. This should help you see how well the result or click volume is distributed in the top 20.


Since we're now showing historical data on the Top Search Queries screen, we figured it would be rude to not let you download it all and play with the data yourself (spreadsheet masochists, I'm looking at you). We added a “Download data” link that lets you download all the stats in CSV format. Note that this exports all query stats historical data across all snapshots as well as search types and languages, so you can slice and dice to your satisfaction. The “Download all stats (including subfolders)” link, however, will still only show query stats for your site and sub-folders for the last 7 days.


We've improved data freshness in Webmaster Tools a couple of times in the past, and we've done it again with the new Top Search Queries. Statistics are now updated constantly. Top query results and clicks may visibly change rank a lot more often now, sometimes daily.

So enough talk. Sign in and play around with the new improvements for yourself. As always we welcome feedback (especially in the form of beer), so feel free to drop us a note in the Webmaster Help Group and let us know what you think.

Introducing Code Search Sitemaps

Update: Code Search Sitemaps are no longer supported. More information.

The Sitemaps team is continuing its trend of extending the Sitemap Protocol for specific products and content types. Our latest work with the Google Code Search team now enables you to create Sitemaps that contain information about public source code you host and would like to include in Code Search. There's more information about this new functionality on the Google Code blog. If you're eager to get going, take a look at our Help Center documentation, create a Code Search Sitemap, sign into Google Webmaster Tools, and submit a Sitemap for Code Search!

Webmasters can now provide feedback on Sitelinks

Sitelinks are extra links that appear below some search results in Google. They serve as shortcuts to help users quickly navigate to the important pages on your site.

Selecting pages to appear as sitelinks is a completely automated process. Our algorithms parse the structure and content of websites and identify pages that provide fast navigation and relevant information for the user's query. Since our algorithms consider several factors to generate sitelinks, not all websites have them.

Now, Webmaster Tools lets you view potential sitelinks for your site and block the ones you don't want to appear in Google search results. Because sitelinks are extremely useful in helping users navigate your site, we don't typically recommend blocking them. However, occasionally you might want to exclude a page from your sitelinks, for example: a page that has become outdated or unavailable, or a page that contains information you don't want emphasized to users. Once you block a page, it won't appear as a sitelink for 90 days unless you choose to unblock it sooner. It may take a week or so to remove a page from your sitelinks, but we are working on making this process faster.

To view and manage your sitelinks, go to the Webmaster Tools Dashboard and click the site you want. In the left menu click Links, then click Sitelinks.
Thanks for your feedback and stay tuned for more updates!

Update: the user-interface for this feature has changed. For more information, please see the Sitelinks Help Center article.

Monday, October 8, 2007

Data freshness

Common feedback we hear from webmasters is that you want us to improve the freshness of the data in Webmaster Tools. Understood. :) We've increased the update frequency for your verified sites' data, such as crawl, index, and search query stats. Much of this data depends on the content of your site. If your content doesn't change very often, or if you're not getting new links to your site, you may not see updates to your data every time you sign in to Webmaster Tools.

Please continue to post your Suggestions & feature requests in the Webmaster Help Group. It's one of our most important sources of feedback from the webmaster community. We seriously take it seriously.

Thursday, September 27, 2007

Improve snippets with a meta description makeover

The quality of your snippet — the short text preview we display for each web result — can have a direct impact on the chances of your site being clicked (i.e. the amount of traffic Google sends your way). We use a number of strategies for selecting snippets, and you can control one of them by writing an informative meta description for each URL.

<META NAME="Description" CONTENT="informative description here">

Why does Google care about meta descriptions?
We want snippets to accurately represent the web result. We frequently prefer to display the meta description of a page (when available) because it gives users a clear idea of the URL's content. This directs them to good results faster and reduces the click-and-backtrack behavior that frustrates visitors and inflates web traffic metrics. Keep in mind that meta descriptions comprised of long strings of keywords don't achieve this goal and are less likely to be displayed in place of a regular, non-meta-description snippet. And it's worth noting that while accurate meta descriptions can improve clickthrough, they won't affect your ranking within search results.

Snippet showing quality meta description

Snippet showing lower-quality meta description

What are some good meta description strategies?
Differentiate the descriptions for different pages
Using identical or similar descriptions on every page of a site isn't very helpful when individual pages appear in the web results. In these cases we're less likely to display the boilerplate text. Create descriptions that accurately describe each specific page. Use site-level descriptions on the main home page or other aggregation pages, and consider using page-level descriptions everywhere else. You should obviously prioritize parts of your site if you don't have time to create a description for every single page; at the very least, create a description for the critical URLs like your homepage and popular pages.

Include clearly tagged facts in the description
The meta description doesn't just have to be in sentence format; it's also a great place to include structured data about the page. For example, news or blog postings can list the author, date of publication, or byline information. This can give potential visitors very relevant information that might not be displayed in the snippet otherwise. Similarly, product pages might have the key bits of information -- price, age, manufacturer -- scattered throughout a page, making it unlikely that a snippet will capture all of this information. Meta descriptions can bring all this data together. For example, consider the following meta description for the 7th Harry Potter Book, taken from a major product aggregator.

Not as desirable:
<META NAME="Description" CONTENT="[domain name redacted]: Harry Potter and the Deathly Hallows (Book 7): Books: J. K. Rowling,Mary GrandPré by J. K. Rowling,Mary GrandPré">

There are a number of reasons this meta description wouldn't work well as a snippet on our search results page:
  • The title of the book completely duplicates information already in the page title.
  • Information within the description itself is duplicated (J. K. Rowling, Mary GrandPré are each listed twice).
  • None of the information in the description is clearly identified; who is Mary GrandPré?
  • The missing spacing and overuse of colons make the description hard to read.

All of this means that the average person viewing a Google results page -- who might spend under a second scanning any given snippet -- is likely to skip this result. As an alternative, consider the meta description below.

Much nicer:
<META NAME="Description" CONTENT="Author: J. K. Rowling, Illustrator: Mary GrandPré, Category: Books, Price: $17.99, Length: 784 pages">

What's changed? No duplication, more information, and everything is clearly tagged and separated. No real additional work is required to generate something of this quality: the price and length are the only new data, and they are already displayed on the site.

Programmatically generate descriptions
For some sites, like news media sources, generating an accurate and unique description for each page is easy: since each article is hand-written, it takes minimal effort to also add a one-sentence description. For larger database-driven sites, like product aggregators, hand-written descriptions are more difficult. In the latter case, though, programmatic generation of the descriptions can be appropriate and is encouraged -- just make sure that your descriptions are not "spammy." Good descriptions are human-readable and diverse, as we talked about in the first point above. The page-specific data we mentioned in the second point is a good candidate for programmatic generation.

Use quality descriptions
Finally, make sure your descriptions are... descriptive. It's easy to become lax on the quality of the meta descriptions, since they're not directly visible in the UI for your site's visitors. But meta descriptions might be displayed in Google search results -- if the description is high enough quality. A little extra work on your meta descriptions can go a long way towards showing a relevant snippet in search results. That's likely to improve the quality and quantity of your user traffic.