Matt Cutts was back for his tenth year at Pubcon and presented his “state of the index” for Google.
SEO in 2011 was about Panda and duplicate content. In 2012, the Penguin initiative cut down on spam. This past year has included:
Some other great stuff Google rolled out over the past year includes:
Matt introduced a new tool that allows webmasters to disavow links. Hurray! How does it work? This is not “magic disavow fairy dust”. Do NOT use this tool unless you are sure you need it! You should try to remove the links first if you can, because other search engines also use this data but don’t have a disavow tool.
To use the tool, go to http://google.com/webmasters/tools/disavow-links-main. Then simply upload a text file into Webmaster Tools. Enter one URL per line, or enter domain: followed by the domain to include all links from a particular domain. This tool is still in its early stages, though, so Matt warned that most webmasters and sites should NOT use it. Once the information is submitted, it will likely take a few days or weeks for Google to recrawl the links. Google will then annotate those links with your disavow information.
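For reference, the uploaded text file follows a simple format: one URL or domain per line, with lines starting with # treated as comments. A minimal sketch (the domains and URLs here are placeholders, not real sites):

```text
# Asked these sites to remove links on 10/1; no response
domain:spam-directory-example.com
http://link-farm-example.net/widgets/page1.html
http://link-farm-example.net/widgets/page2.html
```

The domain: line disavows every link from that domain, while the full URLs disavow only those individual pages.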
Matt also reminded the audience that, as SEOs, you should always be skeptical, especially of wild promises.
Eric Enge asked Matt whether disavowing a link with the new tool is similar to nofollow. Matt said that it essentially is, but applied from the site on the receiving end of the inbound link. Also, nofollow drops links, whereas disavow does not yet do that, as it’s still being tested.
Another audience member asked how disavow works with a reconsideration request. Matt suggested submitting the disavow file first, waiting 2-3 days, and then filing the reconsideration request, mentioning in it that you submitted data via the disavow tool.
Will a disavow on YOUR site hurt you? Matt says not likely. But ideally, take down spam links when others ask you to, if you can.
Another audience member asked why Google can’t always identify the originator of content versus scrapers. Matt recommended checking out analyzethis.ru to see how Google is doing at addressing duplicate content.
What is the timeframe in which duplicate content is assessed? Matt said that duplicate content is identified far upstream, at indexing time. Duplicate content isn’t so much penalized, however, as simply not shown.